...
* docs: Update TOAST storage docs for configurable compression. (Robert Haas, 2021-04-13)
  Mention that there are multiple TOAST compression methods and that the compression method used is stored in a TOAST pointer along with the other information that was stored there previously. Add a reference to the documentation for default_toast_compression, where the supported methods are listed, instead of duplicating that here.
  I haven't tried to preserve the text claiming that pglz is "fairly simple and very fast." I have no view on the veracity of the former claim, but LZ4 seems to be faster (and to compress better) so it seems better not to muddy the waters by talking about compression speed as a strong point of PGLZ.
  Patch by me, reviewed by Justin Pryzby.
  Discussion: http://postgr.es/m/CA+Tgmoaw_YBwQhOS_hhEPPwFhfAnu+VCLs18EfGr9gQw1z4H-w@mail.gmail.com

* Fix some inappropriately-disallowed uses of ALTER ROLE/DATABASE SET. (Tom Lane, 2021-04-13)
  Most GUC check hooks that inspect database state have special checks that prevent them from throwing hard errors for state-dependent issues when source == PGC_S_TEST. This allows, for example, "ALTER DATABASE d SET default_text_search_config = foo" when the "foo" configuration hasn't been created yet. Without this, we have problems during dump/reload or pg_upgrade, because pg_dump has no idea about possible dependencies of GUC values and can't ensure a safe restore ordering.
  However, check_role() and check_session_authorization() hadn't gotten the memo about that, and would throw hard errors anyway. It's not entirely clear what is the use-case for "ALTER ROLE x SET role = y", but we've now heard two independent complaints about that bollixing an upgrade, so apparently some people are doing it. Hence, fix these two functions to act more like other check hooks with similar needs. (But I did not change their insistence on being inside a transaction, as it's still not apparent that setting either GUC from the configuration file would be wise.)
  Also fix check_temp_buffers, which had a different form of the disease of making state-dependent checks without any exception for PGC_S_TEST. A cursory survey of other GUC check hooks did not find any more issues of this ilk. (There are a lot of interdependencies among PGC_POSTMASTER and PGC_SIGHUP GUCs, which may be a bad idea, but they're not relevant to the immediate concern because they can't be set via ALTER ROLE/DATABASE.)
  Per reports from Charlie Hornsby and Nathan Bossart. Back-patch to all supported branches.
  Discussion: https://postgr.es/m/HE1P189MB0523B31598B0C772C908088DB7709@HE1P189MB0523.EURP189.PROD.OUTLOOK.COM
  Discussion: https://postgr.es/m/20160711223641.1426.86096@wrigleys.postgresql.org

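  For illustration, a minimal sketch of the kinds of commands affected (the role, database, and configuration names here are hypothetical); with this fix they are accepted at dump/reload time even if the referenced objects or roles do not exist yet:

      -- already tolerated before, thanks to the PGC_S_TEST exception:
      ALTER DATABASE mydb SET default_text_search_config = my_config;
      -- previously rejected with a hard error, now handled the same way:
      ALTER ROLE alice SET role = bob;
      ALTER ROLE alice SET session_authorization = bob;
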
* Redesign the caching done by get_cached_rowtype(). (Tom Lane, 2021-04-13)
  Previously, get_cached_rowtype() cached a pointer to a reference-counted tuple descriptor from the typcache, relying on the ExprContextCallback mechanism to release the tupdesc refcount when the expression tree using the tupdesc was destroyed. This worked fine when it was designed, but the introduction of within-DO-block COMMITs broke it. The refcount is logged in a transaction-lifespan resource owner, but plpgsql won't destroy simple expressions made within the DO block (before its first commit) until the DO block is exited. That results in a warning about a leaked tupdesc refcount when the COMMIT destroys the original resource owner, and then an error about the active resource owner not holding a matching refcount when the expression is destroyed.
  To fix, get rid of the need to have a shutdown callback at all, by instead caching a pointer to the relevant typcache entry. Those survive for the life of the backend, so we needn't worry about the pointer becoming stale. (For registered RECORD types, we can still cache a pointer to the tupdesc, knowing that it won't change for the life of the backend.) This mechanism has been in use in plpgsql and expandedrecord.c since commit 4b93f5799, and seems to work well.
  This change requires modifying the ExprEvalStep structs used by the relevant expression step types, which is slightly worrisome for back-patching. However, there seems no good reason for extensions to be familiar with the details of these particular sub-structs.
  Per report from Rohit Bhogate. Back-patch to v11 where within-DO-block COMMITs became a thing.
  Discussion: https://postgr.es/m/CAAV6ZkQRCVBh8qAY+SZiHnz+U+FqAGBBDaDTjF2yiKa2nJSLKg@mail.gmail.com

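  As a rough sketch of the scenario described above (a hypothetical shape only, not a confirmed reproducer; the type and values are made up), the trouble involved composite-type expressions evaluated in a DO block that commits partway through:

      CREATE TYPE pair AS (a int, b int);
      DO $$
      DECLARE p pair;
      BEGIN
        p.a := 1;   -- expression step caches row-type information
        COMMIT;     -- destroys the transaction-lifespan resource owner
        p.a := 2;   -- the cached tupdesc refcount was tied to the old owner
      END $$;
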
* Avoid improbable PANIC during heap_update. (Tom Lane, 2021-04-13)
  heap_update needs to clear any existing "all visible" flag on the old tuple's page (and on the new page too, if different). Per coding rules, to do this it must acquire pin on the appropriate visibility-map page while not holding exclusive buffer lock; which creates a race condition since someone else could set the flag whenever we're not holding the buffer lock. The code is supposed to handle that by re-checking the flag after acquiring buffer lock and retrying if it became set. However, one code path through heap_update itself, as well as one in its subroutine RelationGetBufferForTuple, failed to do this. The end result, in the unlikely event that a concurrent VACUUM did set the flag while we're transiently not holding lock, is a non-recurring "PANIC: wrong buffer passed to visibilitymap_clear" failure.
  This has been seen a few times in the buildfarm since recent VACUUM changes that added code paths that could set the all-visible flag while holding only exclusive buffer lock. Previously, the flag was (usually?) set only after doing LockBufferForCleanup, which would insist on buffer pin count zero, thus preventing the flag from becoming set partway through heap_update. However, it's clear that it's heap_update not VACUUM that's at fault here.
  What's less clear is whether there is any hazard from these bugs in released branches. heap_update is certainly violating API expectations, but if there is no code path that can set all-visible without a cleanup lock then it's only a latent bug. That's not 100% certain though, besides which we should worry about extensions or future back-patch fixes that could introduce such code paths. I chose to back-patch to v12. Fixing RelationGetBufferForTuple before that would require also back-patching portions of older fixes (notably 0d1fe9f74), which is more code churn than seems prudent to fix a hypothetical issue.
  Discussion: https://postgr.es/m/2247102.1618008027@sss.pgh.pa.us

* doc: Fix typo in logicaldecoding.sgml. (Fujii Masao, 2021-04-13)
  Introduced in commit 0aa8a01d04.
  Author: Peter Smith
  Discussion: https://postgr.es/m/CAHut+Ps8rkVHKWyjg09Fb1PaVG5-EhoFTFJ9OZMF4HPzDSXfew@mail.gmail.com

* Use "-I." in directories holding Bison parsers, for Oracle compilers.Noah Misch2021-04-12
| | | | | | | | | With the Oracle Developer Studio 12.6 compiler, #line directives alter the current source file location for purposes of #include "..." directives. Hence, a VPATH build failed with 'cannot find include file: "specscanner.c"'. With two exceptions, parser-containing directories already add "-I. -I$(srcdir)"; eliminate the exceptions. Back-patch to 9.6 (all supported versions).
* Port regress-python3-mangle.mk to Solaris "sed". (Noah Misch, 2021-04-12)
  It doesn't support "\(foo\)*" like a POSIX "sed" implementation does; see the Autoconf manual.
  Back-patch to 9.6 (all supported versions).

* Fix potential SSI hazard in heap_update(). (Thomas Munro, 2021-04-13)
  Commit 6f38d4dac38 failed to heed a warning about the stability of the value pointed to by "otid". The caller is allowed to pass in a pointer to newtup->t_self, which will be updated during the execution of the function. Instead, the SSI check should use the value we copy into oldtup.t_self near the top of the function.
  Not a live bug, because newtup->t_self doesn't really get updated until a bit later, but it was confusing and broke the rule established by the comment.
  Back-patch to 13.
  Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
  Discussion: https://postgr.es/m/2689164.1618160085%40sss.pgh.pa.us

* Remove duplicated --no-sync switches in new tests of test_pg_dump (Michael Paquier, 2021-04-13)
  These got introduced in 6568cef.
  Reported-by: Noah Misch
  Discussion: https://postgr.es/m/20210404220802.GA728316@rfd.leadboat.com

* Remove no-longer-relevant test case. (Tom Lane, 2021-04-12)
  collate.icu.utf8.sql was exercising the recording of a collation dependency for an enum comparison expression, but such an expression should never have had any collation dependency in the first place. After I fixed that in commit c402b02b9, the test started failing. We don't need to test that scenario anymore, so just remove the now-useless test steps.
  (This test case is new in HEAD, so no need to back-patch.)
  Discussion: https://postgr.es/m/3044030.1618261159@sss.pgh.pa.us
  Discussion: https://postgr.es/m/HK0PR01MB22744393C474D503E16C8509F4709@HK0PR01MB2274.apcprd01.prod.exchangelabs.com

* Fix old bug with coercing the result of a COLLATE expression. (Tom Lane, 2021-04-12)
  There are hacks in parse_coerce.c to push down a requested coercion to below any CollateExpr that may appear. However, we did that even if the requested data type is non-collatable, leading to an invalid expression tree in which CollateExpr is applied to a non-collatable type. The fix is just to drop the CollateExpr altogether, reasoning that it's useless.
  This bug is ten years old, dating to the original addition of COLLATE support. The lack of field complaints suggests that there aren't a lot of user-visible consequences. We noticed the problem because it would trigger an assertion in DefineVirtualRelation if the invalid structure appears as an output column of a view; however, in a non-assert build, you don't see a crash just a (subtly incorrect) complaint about applying collation to a non-collatable type. I found that by putting the incorrect structure further down in a view, I could make a view definition that would fail dump/reload, per the added regression test case. But CollateExpr doesn't do anything at run-time, so this likely doesn't lead to any really exciting consequences.
  Per report from Yulin Pei. Back-patch to all supported branches.
  Discussion: https://postgr.es/m/HK0PR01MB22744393C474D503E16C8509F4709@HK0PR01MB2274.apcprd01.prod.exchangelabs.com

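  A hypothetical sketch of the kind of expression involved: a coercion applied on top of a COLLATE clause whose target type is not collatable, buried inside a view definition:

      CREATE VIEW v AS
        SELECT c1 + 1 AS c1p
        FROM (SELECT ('4' COLLATE "C")::int AS c1) ss;
      -- the CollateExpr under the cast is now simply dropped, since an
      -- integer result cannot carry a collation
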
* pg_upgrade: Print OID using %u instead of %d (Peter Eisentraut, 2021-04-12)
  This could write wrong output into the cluster deletion script if a database OID exceeds the signed 32-bit range.

* pg_amcheck: Add basic NLS support (Peter Eisentraut, 2021-04-12)

* Fix files references in nls.mk (Peter Eisentraut, 2021-04-12)
  broken by 37d2ff38031262a1778bc76a9c55fff7afbcf275

* Support tab-complete for TRUNCATE on foreign tables. (Fujii Masao, 2021-04-12)
  Commit 8ff1c94649 extended TRUNCATE command so that it can also truncate foreign tables. But it forgot to support tab-complete for TRUNCATE on foreign tables. That is, previously tab-complete for TRUNCATE displayed only the names of regular tables.
  This commit improves tab-complete for TRUNCATE so that it displays also the names of foreign tables.
  Author: Fujii Masao
  Reviewed-by: Bharath Rupireddy
  Discussion: https://postgr.es/m/551ed8c1-f531-818b-664a-2cecdab99cd8@oss.nttdata.com

* Move log_autovacuum_min_duration into its correct sections (Michael Paquier, 2021-04-12)
  This GUC has already been classified as LOGGING_WHAT, but its location in postgresql.conf.sample and the documentation did not reflect that, so fix those inconsistencies.
  Author: Justin Pryzby
  Discussion: https://postgr.es/m/20210404012546.GK6592@telsasoft.com

* doc: Update information of new messages for logical replication. (Amit Kapila, 2021-04-12)
  Updated documentation for new messages added for streaming of in-progress transactions, as well as changes made to the existing messages. It also updates the information of protocol versions supported for logical replication.
  Author: Ajin Cherian
  Reviewed-by: Amit Kapila, Peter Smith, Euler Taveira
  Discussion: https://postgr.es/m/CAFPTHDYHN9m=MZZct-B=BYg_TETvv+kXvL9RD2DpaBS5pGxGYg@mail.gmail.com

* Fix out-of-bound memory access for interval -> char conversion (Michael Paquier, 2021-04-12)
  Using Roman numbers (via "RM" or "rm") for a conversion to calculate a number of months has never considered the case of negative numbers, where a conversion could easily cause out-of-bound memory accesses. The conversions in themselves were not completely consistent either, as specifying 12 would result in NULL, but it should mean XII.
  This commit reworks the conversion calculation to have a more consistent behavior:
  - If the number of months and years is 0, return NULL.
  - If the number of months is positive, return the exact month number.
  - If the number of months is negative, do a backward calculation, with -1 meaning December, -2 November, etc.
  Reported-by: Theodor Arsenij Larionov-Trichkin
  Author: Julien Rouhaud
  Discussion: https://postgr.es/m/16953-f255a18f8c51f1d5@postgresql.org
  backpatch-through: 9.6

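  A few hedged examples of the reworked behavior (the expected outputs are inferred from the rules above):

      SELECT to_char(interval '3 months',  'RM');   -- III
      SELECT to_char(interval '12 months', 'RM');   -- XII (previously NULL)
      SELECT to_char(interval '-1 months', 'RM');   -- XII, counting back from December
      SELECT to_char(interval '0 months',  'RM');   -- no output, per the first rule
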
* Silence some Coverity warnings and improve code consistency. (Tom Lane, 2021-04-11)
  Coverity complained about possible overflow in expressions like
      intresult = tm->tm_sec * 1000000 + fsec;
  on the grounds that the multiplication would happen in 32-bit arithmetic before widening to the int64 result. I think these are all false positives because of the limited possible range of tm_sec; but nonetheless it seems silly to spell it like that when nearby lines have the identical computation written with a 64-bit constant.
  ... or more accurately, with an LL constant, which is not project style. Make all of these use INT64CONST(), as we do elsewhere.
  This is all new code from a2da77cdb, so no need for back-patch.

* Add macro PGWARNING, and make PGERROR available on all platforms. (Tom Lane, 2021-04-11)
  We'd previously noted the need for coping with Windows headers that provide some other definition of macro "ERROR" than elog.h does. It turns out that R also wants to define ERROR, and WARNING too. PL/R has been working around this in a hacky way that broke when we recently changed the numeric value of ERROR. To let them have a more future-proof solution, provide an alternate macro PGWARNING for WARNING, and make PGERROR visible always, not only when #ifdef WIN32.
  Discussion: https://postgr.es/m/CADK3HHK6iMChd1yoOqssxBn5Z14Zar8Ztr3G-N_fuG7F8YTP3w@mail.gmail.com

* Fix uninitialized variable from commit a4d75c86b. (Tom Lane, 2021-04-11)
  The path for *exprs != NIL would misbehave, and likely crash, since pull_varattnos expects its last argument to be valid at call.
  Found by Coverity --- we have no coverage of this path in the regression tests.

* Avoid unnecessary table open/close in TRUNCATE command. (Fujii Masao, 2021-04-12)
  ExecuteTruncate() filters out the duplicate tables specified in the TRUNCATE command, for example in the case where "TRUNCATE foo, foo" is executed. Such duplicate tables obviously don't need to be opened and closed because they are skipped. But previously it always opened the tables before checking whether they were duplicates, and then closed them again if they were. That is, duplicated tables were opened and closed unnecessarily.
  This commit changes ExecuteTruncate() so that it opens a table only after confirming that it is not a duplicate, avoiding the unnecessary table open/close.
  Do not back-patch, because the unnecessary table open/close is not a bug even though it exists in older versions.
  Author: Bharath Rupireddy
  Reviewed-by: Amul Sul, Fujii Masao
  Discussion: https://postgr.es/m/CALj2ACUdBO_sXJTa08OZ0YT0qk7F_gAmRa9hT4dxRcgPS4nsZA@mail.gmail.com

* Remove COMMIT_TS_SETTS record. (Fujii Masao, 2021-04-12)
  Commit 438fc4a39c prevented WAL replay from writing a COMMIT_TS_SETTS record. With that change, no code in PostgreSQL core generates a COMMIT_TS_SETTS record. It also seems that no extensions use the record, since we have received no complaints about the issue that commit 438fc4a39c fixed. Therefore this commit removes the COMMIT_TS_SETTS record and its related code. Even without this record, the timestamp required for the commit timestamp feature can be acquired from the COMMIT record.
  Bump WAL page magic.
  Reported-by: lx zou <zoulx1982@163.com>
  Author: Fujii Masao
  Reviewed-by: Alvaro Herrera
  Discussion: https://postgr.es/m/16931-620d0f2fdc6108f1@postgresql.org

* Standardize pg_authid oid_symbol values. (Noah Misch, 2021-04-10)
  Commit c9c41c7a337d3e2deb0b2a193e9ecfb865d8f52b used two different naming patterns. Standardize on the majority pattern, which was the only pattern in the last reviewed version of that commit.

* Improve behavior of date_bin with origin in the future (Peter Eisentraut, 2021-04-10)
  Currently, when the origin is after the input, the result is the timestamp at the end of the bin, rather than the beginning as expected. This puts the result consistently at the beginning of the bin.
  Author: John Naylor <john.naylor@enterprisedb.com>
  Discussion: https://www.postgresql.org/message-id/CAFBsxsGjLDxQofRfH+d4KSAXxPf3MMevUG7s6EDfdBOvHLDLjw@mail.gmail.com

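  A hedged example of the corrected behavior, with an origin that falls after the input timestamp (values chosen purely for illustration):

      SELECT date_bin('15 minutes',
                      timestamp '2020-02-11 15:44:17',
                      timestamp '2021-01-01 00:00:00');
      -- expected: 2020-02-11 15:30:00, the start of the bin,
      -- not 2020-02-11 15:45:00, its end
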
* Fix failure of xlogprefetch.h to include all prerequisite headers. (Tom Lane, 2021-04-10)
  Per cpluspluscheck.

* Doc: update documentation of check_function_bodies. (Tom Lane, 2021-04-10)
  Adjust docs and description string to note that check_function_bodies applies to procedures too. (In hindsight it should have been named check_routine_bodies, but it seems too late for that now.)
  Daniel Westermann
  Discussion: https://postgr.es/m/GV0P278MB04834A9EB9A74B036DC7CE49D2739@GV0P278MB0483.CHEP278.PROD.OUTLOOK.COM

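  A brief sketch of what the setting now clearly covers (the procedure and table names are hypothetical):

      SET check_function_bodies = off;
      -- the body is not validated at creation time, so it may reference
      -- objects that do not exist yet; this applies to procedures as well
      CREATE PROCEDURE move_rows() LANGUAGE sql AS
        $$ INSERT INTO archive_table SELECT * FROM live_table $$;
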
* Improve slightly misleading comments in nodeFuncs.c (David Rowley, 2021-04-10)
  There were some comments in nodeFuncs.c that, depending on your interpretation of the word "result", could lead you to believe that the comments were badly copied and pasted from somewhere else. If you thought of "result" as the return value of the function that the comment is written in, then you'd be misled. However, if you'd correctly interpreted "result" to mean the result type of the given node type, you'd not have seen any issues.
  Here we do a small cleanup to try to prevent any future misinterpretations. Per wording suggestion from Tom Lane.
  Reviewed-by: Tom Lane
  Discussion: https://postgr.es/m/CAApHDvp+Bw=2Qiu5=uXMKfC7gd0+B=4JvexVgGJU=am2g9a1CA@mail.gmail.com

* doc: Fix man page whitespace issues (Peter Eisentraut, 2021-04-09)
  Whitespace between tags is significant, and in some cases it creates extra vertical space in man pages. The fix is to remove some newlines in the markup.

* Suppress length of Notice/Error msgs in PQtrace regress mode (Alvaro Herrera, 2021-04-09)
  A (relatively minor) annoyance of ErrorResponse/NoticeResponse messages as printed by PQtrace() is that their length might vary when we move error messages from one source file to another, one function to another, or even when their location line numbers change number of digits.
  To avoid having to adjust expected files for some tests, make the regress mode of PQtrace() suppress the length word of NoticeResponse and ErrorResponse messages.
  Discussion: https://postgr.es/m/20210402023010.GA13563@alvherre.pgsql
  Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>

* Make new GUC short descriptions more consistent. (Thomas Munro, 2021-04-10)
  Reported-by: Daniel Westermann (DWE) <daniel.westermann@dbi-services.com>
  Discussion: https://postgr.es/m/GV0P278MB0483490FEAC879DCA5ED583DD2739%40GV0P278MB0483.CHEP278.PROD.OUTLOOK.COM

* Doc: Review for "Optionally prefetch referenced data in recovery."Thomas Munro2021-04-10
| | | | | | | | Typos, corrections and language improvements in the docs, and a few in code comments too. Reported-by: Justin Pryzby <pryzby@telsasoft.com> Discussion: https://postgr.es/m/20210409033703.GP6592%40telsasoft.com
* doc: Additional documentation for date_bin (Peter Eisentraut, 2021-04-09)
  Reported-by: Justin Pryzby <pryzby@telsasoft.com>
  Author: John Naylor <john.naylor@enterprisedb.com>
  Discussion: https://www.postgresql.org/message-id/CAFBsxsEEm1nuhZmfVQxvu_i3nDDEuvNJ_WMrDo9whFD_jusp-A@mail.gmail.com

* Document ANALYZE storage parameters for partitioned tables (Alvaro Herrera, 2021-04-09)
  Commit 0827e8af70f4 added parameters for autovacuum to support partitioned tables, but didn't add any docs. Add them.
  Discussion: https://postgr.es/m/20210408213051.GL6592@telsasoft.com

* Set pg_class.reltuples for partitioned tables (Alvaro Herrera, 2021-04-09)
  When commit 0827e8af70f4 added auto-analyze support for partitioned tables, it included code to obtain reltuples for the partitioned table as a number of catalog accesses to read pg_class.reltuples for each partition. That's not only very inefficient, but also problematic because autovacuum doesn't hold any locks on any of those tables -- and doesn't want to. Replace that code with a read of pg_class.reltuples for the partitioned table, and make sure ANALYZE and TRUNCATE properly maintain that value.
  I found no code that would be affected by the change of relpages from zero to non-zero for partitioned tables, and no other code that should be maintaining it, but if there is, hopefully it'll be an easy fix. Per buildfarm.
  Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
  Reviewed-by: Zhihong Yu <zyu@yugabyte.com>
  Discussion: https://postgr.es/m/1823909.1617862590@sss.pgh.pa.us

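  For illustration, a hedged example of inspecting the value that ANALYZE now maintains (object names are hypothetical):

      CREATE TABLE measurements (ts timestamptz, v int) PARTITION BY RANGE (ts);
      -- ... create and fill partitions ...
      ANALYZE measurements;
      SELECT relname, reltuples
      FROM pg_class
      WHERE relname = 'measurements';   -- reflects the partitions' row counts
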
* Fix typo (Magnus Hagander, 2021-04-09)
  Author: Daniel Westermann
  Backpatch-through: 9.6
  Discussion: https://postgr.es/m/GV0P278MB0483A7AA85BAFCC06D90F453D2739@GV0P278MB0483.CHEP278.PROD.OUTLOOK.COM

* Fix typos and grammar in documentation and code comments (Michael Paquier, 2021-04-09)
  Comment fixes are applied on HEAD, and documentation improvements are applied on back-branches where needed.
  Author: Justin Pryzby
  Discussion: https://postgr.es/m/20210408164008.GJ6592@telsasoft.com
  Backpatch-through: 9.6

* Silence another _bt_check_unique compiler warning. (Peter Geoghegan, 2021-04-08)
  Per complaint from Tom Lane
  Discussion: https://postgr.es/m/1922884.1617909599@sss.pgh.pa.us

* Add support for tab-completion of type arguments in \df, \do. (Tom Lane, 2021-04-08)
  Oversight in commit a3027e1e7.

* Suppress uninitialized-variable warning. (Tom Lane, 2021-04-08)
  Several buildfarm critters that don't usually produce such warnings are complaining about e717a9a18. I think it's actually safe, but move initialization to silence the warning.

* Fixes for query_id feature (Bruce Momjian, 2021-04-08)
  Ignore parallel workers in pg_stat_statements: oversight in 4f0b0966c8 which exposed queryid in parallel workers. Counters are aggregated by the main backend process so parallel workers would report duplicated activity, and could also report activity for the wrong entry as they are only aware of the top level queryid.
  Fix thinko in pg_stat_get_activity when retrieving the queryid.
  Remove unnecessary call to pgstat_report_queryid().
  Reported-by: Amit Kapila, Andres Freund, Thomas Munro
  Discussion: https://postgr.es/m/20210408051735.lfbdzun5zdlax5gd@alap3.anarazel.de p634GTSOqnDW86Owrn6qDAVosC5dJjXjp7BMfc5Gz1Q@mail.gmail.com
  Author: Julien Rouhaud

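  As a rough illustration (assuming compute_query_id is enabled in the configuration), the query identifier this work exposes can be inspected like this:

      SELECT pid, query_id, query
      FROM pg_stat_activity
      WHERE pid = pg_backend_pid();
      SELECT queryid, calls, query
      FROM pg_stat_statements
      ORDER BY calls DESC LIMIT 5;
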
* Merge v1.10 of pg_stat_statements into v1.9 (Magnus Hagander, 2021-04-08)
  v1.9 is already new in this version of PostgreSQL, so turn it into just one change.
  Author: Julien Rouhaud
  Discussion: https://postgr.es/m/20210408120505.7zinijtdexbyghvb@nol

* Remove duplicate typedef. (Thomas Munro, 2021-04-09)
  Thinko in commit 323cbe7c, per complaint from BF animal locust's older GCC compiler.

* Allow TRUNCATE command to truncate foreign tables. (Fujii Masao, 2021-04-08)
  This commit introduces a new foreign data wrapper API for TRUNCATE. It extends the TRUNCATE command so that it accepts foreign tables as the targets to truncate and invokes that API. Also it extends postgres_fdw so that it can issue a TRUNCATE command to foreign servers, by adding a new routine for that TRUNCATE API.
  The information about options specified in the TRUNCATE command, e.g., ONLY, CASCADE, etc., is passed to the FDW via the API. The list of foreign tables to truncate is also passed to the FDW. The FDW truncates the foreign data sources that the passed foreign tables specify, based on that information. For example, postgres_fdw constructs a TRUNCATE command using them and issues it to the foreign server.
  For performance, the TRUNCATE command invokes the FDW routine for TRUNCATE once per foreign server that the foreign tables to truncate belong to.
  Author: Kazutaka Onishi, Kohei KaiGai, slightly modified by Fujii Masao
  Reviewed-by: Bharath Rupireddy, Michael Paquier, Zhihong Yu, Alvaro Herrera, Stephen Frost, Ashutosh Bapat, Amit Langote, Daniel Gustafsson, Ibrar Ahmed, Fujii Masao
  Discussion: https://postgr.es/m/CAOP8fzb_gkReLput7OvOK+8NHgw-RKqNv59vem7=524krQTcWA@mail.gmail.com
  Discussion: https://postgr.es/m/CAJuF6cMWDDqU-vn_knZgma+2GMaout68YUgn1uyDnexRhqqM5Q@mail.gmail.com

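  A hedged usage sketch with postgres_fdw (server, mapping, and table names are hypothetical):

      CREATE EXTENSION postgres_fdw;
      CREATE SERVER remote_srv FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'remote-host', dbname 'remote_db');
      CREATE USER MAPPING FOR CURRENT_USER SERVER remote_srv
        OPTIONS (user 'remote_user', password 'secret');
      CREATE FOREIGN TABLE remote_log (id int, msg text)
        SERVER remote_srv OPTIONS (table_name 'log');
      -- with this commit, the TRUNCATE is forwarded to the remote server
      TRUNCATE remote_log;
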
* Speedup ScalarArrayOpExpr evaluation (David Rowley, 2021-04-08)
  ScalarArrayOpExprs with "useOr=true" and a set of Consts on the righthand side have traditionally been evaluated by using a linear search over the array. When these arrays contain large numbers of elements then this linear search could become a significant part of execution time.
  Here we add a new method of evaluating ScalarArrayOpExpr expressions to allow them to be evaluated by first building a hash table containing each element, then on subsequent evaluations, we just probe that hash table to determine if there is a match.
  The planner is in charge of determining when this optimization is possible and it enables it by setting hashfuncid in the ScalarArrayOpExpr. The executor will only perform the hash table evaluation when the hashfuncid is set.
  This means that not all cases are optimized. For example CHECK constraints containing an IN clause won't go through the planner, so won't get the hashfuncid set. We could maybe do something about that at some later date. The reason we're not doing it now is from fear that we may slow down cases where the expression is evaluated only once. Those cases can be common, for example, a single row INSERT to a table with a CHECK constraint containing an IN clause.
  In the planner, we enable this when there are suitable hash functions for the ScalarArrayOpExpr's operator and only when there is at least MIN_ARRAY_SIZE_FOR_HASHED_SAOP elements in the array. The threshold is currently set to 9.
  Author: James Coleman, David Rowley
  Reviewed-by: David Rowley, Tomas Vondra, Heikki Linnakangas
  Discussion: https://postgr.es/m/CAAaqYe8x62+=wn0zvNKCj55tPpg-JBHzhZFFc6ANovdqFw7-dA@mail.gmail.com

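  For example, an IN clause with at least nine constants (the threshold mentioned above) becomes eligible for the hashed evaluation, assuming suitable hash functions exist for the operator; a minimal sketch:

      CREATE TABLE events (id int, kind int);
      -- ten Consts on the righthand side: evaluated via a hash table probe
      -- per row instead of a linear search
      SELECT count(*) FROM events
      WHERE kind IN (1, 2, 3, 4, 5, 6, 7, 8, 9, 10);
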
* Optionally prefetch referenced data in recovery. (Thomas Munro, 2021-04-08)
  Introduce a new GUC recovery_prefetch, disabled by default. When enabled, look ahead in the WAL and try to initiate asynchronous reading of referenced data blocks that are not yet cached in our buffer pool. For now, this is done with posix_fadvise(), which has several caveats. Better mechanisms will follow in later work on the I/O subsystem.
  The GUC maintenance_io_concurrency is used to limit the number of concurrent I/Os we allow ourselves to initiate, based on pessimistic heuristics used to infer that I/Os have begun and completed.
  The GUC wal_decode_buffer_size is used to limit the maximum distance we are prepared to read ahead in the WAL to find uncached blocks.
  Reviewed-by: Alvaro Herrera <alvherre@2ndquadrant.com> (parts)
  Reviewed-by: Andres Freund <andres@anarazel.de> (parts)
  Reviewed-by: Tomas Vondra <tomas.vondra@2ndquadrant.com> (parts)
  Tested-by: Tomas Vondra <tomas.vondra@2ndquadrant.com>
  Tested-by: Jakub Wartak <Jakub.Wartak@tomtom.com>
  Tested-by: Dmitry Dolgov <9erthalion6@gmail.com>
  Tested-by: Sait Talha Nisanci <Sait.Nisanci@microsoft.com>
  Discussion: https://postgr.es/m/CA%2BhUKGJ4VJN8ttxScUFM8dOKX0BrBiboo5uz1cq%3DAovOddfHpA%40mail.gmail.com

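  A hedged configuration sketch for a standby doing WAL replay (the values are illustrative only, and some of these settings may require a server restart rather than a reload):

      ALTER SYSTEM SET recovery_prefetch = on;
      ALTER SYSTEM SET wal_decode_buffer_size = '512kB';
      ALTER SYSTEM SET maintenance_io_concurrency = 10;
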
* Add circular WAL decoding buffer. (Thomas Munro, 2021-04-08)
  Teach xlogreader.c to decode its output into a circular buffer, to support optimizations based on looking ahead.
  * XLogReadRecord() works as before, consuming records one by one, and allowing them to be examined via the traditional XLogRecGetXXX() macros.
  * An alternative new interface XLogNextRecord() is added that returns pointers to DecodedXLogRecord structs that can be examined directly.
  * XLogReadAhead() provides a second cursor that lets you see further ahead, as long as data is available and there is enough space in the decoding buffer. This returns DecodedXLogRecord pointers to the caller, but also adds them to a queue of records that will later be consumed by XLogNextRecord()/XLogReadRecord().
  The buffer's size is controlled with wal_decode_buffer_size. The buffer could potentially be placed into shared memory, for future projects. Large records that don't fit in the circular buffer are called "oversized" and allocated separately with palloc().
  Discussion: https://postgr.es/m/CA+hUKGJ4VJN8ttxScUFM8dOKX0BrBiboo5uz1cq=AovOddfHpA@mail.gmail.com

* Remove read_page callback from XLogReader. (Thomas Munro, 2021-04-08)
  Previously, the XLogReader module would fetch new input data using a callback function. Redesign the interface so that it tells the caller to insert more data with a special return value instead. This API suits later patches for prefetching, encryption and maybe other future projects that would otherwise require continually extending the callback interface.
  As incidental cleanup work, move global variables readOff, readLen and readSegNo inside XlogReaderState.
  Author: Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>
  Author: Heikki Linnakangas <hlinnaka@iki.fi> (parts of earlier version)
  Reviewed-by: Antonin Houska <ah@cybertec.at>
  Reviewed-by: Alvaro Herrera <alvherre@2ndquadrant.com>
  Reviewed-by: Takashi Menjo <takashi.menjo@gmail.com>
  Reviewed-by: Andres Freund <andres@anarazel.de>
  Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
  Discussion: https://postgr.es/m/20190418.210257.43726183.horiguchi.kyotaro%40lab.ntt.co.jp

* Cleanup partition pruning step generation (David Rowley, 2021-04-08)
  There was some code in gen_prune_steps_from_opexps that needlessly checked a list was not empty when it clearly had to contain at least one item. This prompted a further cleanup operation in partprune.c.
  Additionally, the previous code could end up adding additional needless INTERSECT steps. However, those do not appear to be able to cause any misbehavior.
  gen_prune_steps_from_opexps is now no longer in charge of generating combine pruning steps. Instead, gen_partprune_steps_internal, which already does some combine step creation, has been given the sole responsibility of generating all combine steps. This means that when we recursively call gen_partprune_steps_internal, since it always now adds a combine step when it produces multiple steps, we can just pay attention to the final step returned.
  In passing, do quite a bit of work on the comments to try to more clearly explain the role of both gen_partprune_steps_internal and gen_prune_steps_from_opexps. This is fairly complex code so some extra effort to give any new readers an overview of how things work seems like a good idea.
  Author: Amit Langote
  Reported-by: Andy Fan
  Reviewed-by: Kyotaro Horiguchi, Andy Fan, Ryan Lambert, David Rowley
  Discussion: https://postgr.es/m/CAKU4AWqWoVii+bRTeBQmeVW+PznkdO8DfbwqNsu9Gj4ubt9A6w@mail.gmail.com

* Add ORDER BY to some regression test queries (Peter Eisentraut, 2021-04-08)
  Apparently, an unrelated patch introduced some variation on the build farm.
  Reported-by: Magnus Hagander <magnus@hagander.net>