path: root/src
...
* Avoid fetching from an already-terminated plan.  (Tom Lane, 2021-09-09)
  Some plan node types don't react well to being called again after they've already returned NULL. PortalRunSelect() has long dealt with this by calling the executor with NoMovementScanDirection if it sees that we've already run the portal to the end. However, commit ba2c6d6ce overlooked this point, so that persisting an already-fully-fetched cursor would fail if it had such a plan.
  Per report from Tomas Barton. Back-patch to v11, as the faulty commit was. (I've omitted a test case because the type of plan that causes a problem isn't all that stable.)
  Discussion: https://postgr.es/m/CAPV2KRjd=ErgVGbvO2Ty20tKTEZZr6cYsYLxgN_W3eAo9pf5sw@mail.gmail.com
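  Illustrative sketch (editorial addition, not part of the commit; object names are hypothetical). "Persisting" happens when a WITH HOLD cursor is held open across COMMIT; whether the failure actually triggered depended on the plan type underlying the cursor:
      BEGIN;
      DECLARE c CURSOR WITH HOLD FOR SELECT * FROM some_table ORDER BY 1;
      FETCH ALL FROM c;   -- runs the portal to the end
      COMMIT;             -- persisting the already-fully-fetched cursor could fail here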
* pgbench: Stop counting skipped transactions as soon as timer is exceeded.  (Fujii Masao, 2021-09-10)
  When throttling is used, transactions that lag behind schedule by more than the latency limit are counted and reported as skipped. Previously, there were cases where pgbench counted all skipped transactions even after the timer specified with the -T option had been exceeded. This could take a very long time, especially when an unrealistically high rate setting in the -R option caused quite a lot of transactions to lag behind schedule. It could prevent pgbench from ending promptly, making pgbench appear to users as if it had gotten stuck.
  To fix the issue, this commit changes pgbench so that it stops counting skipped transactions as soon as the timer is exceeded. The timer can thus make pgbench end soon even when there are lots of skipped transactions that have not been counted yet.
  Note that there is no guarantee that all skipped transactions are counted under -T, though there is under -t. This is OK in practice because it's very unlikely to matter with a realistic setting. Also, this is not an issue that this commit newly introduces; there have always been cases where pgbench ended without counting all skipped transactions.
  Back-patch to v14. Per discussion, we decided not to bother back-patching to the older stable branches because it's hard to imagine the issue arising in practice (with realistic settings).
  Author: Yugo Nagata, Fabien COELHO
  Reviewed-by: Greg Sabino Mullane, Fujii Masao
  Discussion: https://postgr.es/m/20210613040151.265ff59d832f835bbcf8b3ba@sraoss.co.jp
* Check for relation length overrun soon enough.  (Tom Lane, 2021-09-09)
  We don't allow relations to exceed 2^32-1 blocks, because block numbers are 32 bits and the last possible block number is reserved to mean InvalidBlockNumber. There is a check for this in mdextend, but that's really way too late, because the smgr API requires us to create a buffer for the block-to-be-added, and we do not want to have any buffer with blocknum InvalidBlockNumber. (Such a case can trigger assertions in bufmgr.c, plus I think it might confuse ReadBuffer's logic for data-past-EOF later on.) So put the check into ReadBuffer.
  Per report from Christoph Berg. It's been like this forever, so back-patch to all supported branches.
  Discussion: https://postgr.es/m/YTn1iTkUYBZfcODk@msg.credativ.de
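  For reference (editorial addition, not part of the commit), with the default 8 kB block size the limit works out to:
      (2^32 - 1) blocks × 8192 bytes/block = 2^45 - 2^13 bytes, i.e. just under 32 TiB per relation.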
* Fix issue with WAL archiving in standby.  (Fujii Masao, 2021-09-09)
  Previously, walreceiver closed the currently-open WAL segment and created its archive notification file only after it had finished writing out the segment and had received WAL data belonging to the next segment. If walreceiver exited just before any WAL data for the next segment arrived at the standby, it did not create the archive notification file for the current segment even though that segment was known to be complete. This behavior could delay WAL archiving of the segment until a subsequent restartpoint or checkpoint created its notification file.
  To fix the issue, this commit changes walreceiver so that it creates the archive notification file for the current WAL segment immediately once that segment is known to be complete, before waiting to receive the next WAL data.
  Back-patch to all supported branches.
  Reported-by: Kyotaro Horiguchi
  Author: Fujii Masao
  Reviewed-by: Kyotaro Horiguchi
  Discussion: https://postgr.es/m/20200630.165503.1465894182551545886.horikyota.ntt@gmail.com
* Avoid useless malloc/free traffic around getFormattedTypeName().  (Tom Lane, 2021-09-08)
  Coverity complained that one caller of getFormattedTypeName() failed to free the returned string. Which is true, but rather than fixing that one, let's get rid of this tedious and error-prone requirement. Now that getFormattedTypeName() caches its result, strdup'ing that result and expecting the caller to free it accomplishes little except to waste cycles. We do create a leak in the case where getTypes didn't make a TypeInfo for the type, but that basically shouldn't ever happen.
  Back-patch, as commit 6c450a861 was. This isn't a particularly interesting bug fix, but the API change seems like a hazard for future back-patching activity if we don't back-patch it.
* Fix misleading comments about TOAST access macros.  (Tom Lane, 2021-09-08)
  Seems to have been my error in commit aeb1631ed. Noted by Christoph Berg.
  Discussion: https://postgr.es/m/YTeLipdnSOg4NNcI@msg.df7cb.de
* Fix rewriter to set hasModifyingCTE correctly on rewritten queries.  (Tom Lane, 2021-09-08)
  If we copy data-modifying CTEs from the original query to a replacement query (from a DO INSTEAD rule), we must set hasModifyingCTE properly in the replacement query. Failure to do this can cause various unpleasantness, such as unsafe usage of parallel plans. The code also neglected to propagate hasRecursive, though that's only cosmetic at the moment.
  A difficulty arises if the rule action is an INSERT...SELECT. We attach the original query's RTEs and CTEs to the sub-SELECT Query, but data-modifying CTEs are only allowed to appear in the topmost Query. For the moment, throw an error in such cases. It would probably be possible to avoid this error by attaching the CTEs to the top INSERT Query instead; but that would require a bunch of new code to adjust ctelevelsup references. Given the narrowness of the use-case, and the need to back-patch this fix, it does not seem worth the trouble for now. We can revisit this if we get field complaints.
  Per report from Greg Nancarrow. Back-patch to all supported branches. (The test case added here does not fail before v10, but there are plenty of places checking top-level hasModifyingCTE in 9.6, so I have no doubt that this code change is necessary there too.)
  Greg Nancarrow and Tom Lane
  Discussion: https://postgr.es/m/CAJcOf-f68DT=26YAMz_i0+Au3TcLO5oiHY5=fL6Sfuits6r+_w@mail.gmail.com
  Discussion: https://postgr.es/m/CAJcOf-fAdj=nDKMsRhQzndm-O13NY4dL6xGcEvdX5Xvbbi0V7g@mail.gmail.com
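  Illustrative sketch (editorial addition, not part of the commit; table and rule names are hypothetical). The data-modifying CTE in the original query must be carried, with hasModifyingCTE set, into the query substituted by the DO INSTEAD rule:
      CREATE TABLE src_tbl (a int);
      CREATE TABLE dst_tbl (a int);
      CREATE RULE src_ins AS ON INSERT TO src_tbl DO INSTEAD
          INSERT INTO dst_tbl VALUES (NEW.a);
      WITH moved AS (DELETE FROM src_tbl RETURNING a)
      INSERT INTO src_tbl SELECT a FROM moved;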
* Disable anonymous record hash support except in special cases  (Peter Eisentraut, 2021-09-08)
  Commit 01e658fa74 added hash support for row types. This also added support for hashing anonymous record types, using the same approach that the type cache uses for comparison support for record types: it just reports that it works, but it might fail at run time if a component type doesn't actually support the operation. We get away with that for comparison because most types support that. But some types don't support hashing, so the current state can result in failures at run time where the planner chooses hashing over sorting, whereas that previously worked if only sorting was an option.
  We do, however, want the record hashing support for path tracking in recursive unions, and the SEARCH and CYCLE clauses built on that. In that case, hashing is the only plan option. To enable that, this commit implements the following approach: the type cache no longer reports that hashing is available for the record type (undoing that part of 01e658fa74); instead, callers that require hashing no matter what can override that result themselves. This patch only touches the callers needed to make the aforementioned recursive query cases work, namely the parse analysis of unions, as well as the hash_array() function.
  Reported-by: Sait Talha Nisanci <sait.nisanci@microsoft.com>
  Bug: #17158
  Discussion: https://www.postgresql.org/message-id/flat/17158-8a2ba823982537a4%40postgresql.org
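  Illustrative sketch (editorial addition, not part of the commit; table and column names are hypothetical) of the recursive-union case that still relies on record hashing, using the v14 CYCLE clause, which tracks visited rows via a path of row values:
      WITH RECURSIVE search_graph(id, link) AS (
          SELECT g.id, g.link FROM graph g WHERE g.id = 1
        UNION ALL
          SELECT g.id, g.link
          FROM graph g JOIN search_graph sg ON g.id = sg.link
      ) CYCLE id SET is_cycle USING path
      SELECT * FROM search_graph;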
* Invalidate relcache for publications defined for all tables.  (Amit Kapila, 2021-09-08)
  Updates/Deletes on a relation were allowed even without a replica identity after we defined a publication for all tables. This would later lead to an error on subscribers. The reason was that for such publications we were not invalidating the relcache, so the publication information for relations was not getting rebuilt. Similarly, we were not invalidating the relcache after dropping such publications, which meant Updates/Deletes without a replica identity kept being prohibited even though no publication remained.
  Author: Vignesh C and Hou Zhijie
  Reviewed-by: Hou Zhijie, Kyotaro Horiguchi, Amit Kapila
  Backpatch-through: 10, where it was introduced
  Discussion: https://postgr.es/m/CALDaNm0pF6zeWqCA8TCe2sDuwFAy8fCqba=nHampCKag-qLixg@mail.gmail.com
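  Illustrative sketch (editorial addition, not part of the commit; names hypothetical, error text approximate). With the relcache properly invalidated, the publisher now rejects the UPDATE immediately:
      CREATE TABLE t (a int);                      -- no primary key, default replica identity
      CREATE PUBLICATION pub_all FOR ALL TABLES;
      UPDATE t SET a = 1;
      -- ERROR:  cannot update table "t" because it does not have a replica identity and publishes updates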
* Consistently use read-only instead of "read only"  (Magnus Hagander, 2021-09-07)
  This affects one message and some documentation that used the format "read only", unlike everything else that used read-only.
  Backpatch-through: 14
  Discussion: https://postgr.es/m/CABUevExuxKwn0YM3+wdSeQSvK6CRrJ-hewocGVX3R4-xVX4eMw@mail.gmail.com
* Finish reverting 3eda9fc09fd6b9a1aec2d0113c633c69c3214b4d.  (Tom Lane, 2021-09-07)
  Commit 67c33a114 should have set v14's catversion back to what it was before 3eda9fc09, to avoid forcing a useless pg_upgrade cycle on users of 14beta3. Do that now.
  Discussion: https://postgr.es/m/2598498.1630702074@sss.pgh.pa.us
* Fix missing words in comment.  (Heikki Linnakangas, 2021-09-07)
  Introduced by commit c3928b467a; backpatch to v14 like that one.
  Author: Amit Langote
  Discussion: https://www.postgresql.org/message-id/CA+HiwqFQgNLS6VGntMcuJV6erBFV425xA6wBVnY=41GK4zC0Bw@mail.gmail.com
* AIX: Fix missing libpq symbols by respecting SHLIB_EXPORTS.  (Noah Misch, 2021-09-06)
  We make each AIX shared library export all globals found in .o files that originate in the library. That doesn't include symbols acquired by -lpgcommon_shlib. That is good on average, but it became a problem for libpq when commit e6afa8918c461c1dd80c5063a950518fa4e950cd moved five official libpq API symbols into src/common. Fix this by implementing the SHLIB_EXPORTS mechanism for AIX, so affected libraries export the same symbols that they export on Linux. This reintroduces the symbols pg_encoding_to_char, pg_utf_mblen, pg_char_to_encoding, pg_valid_server_encoding, and pg_valid_server_encoding_id.
  Back-patch to v13, where the aforementioned commit first appeared. While a minor release is usually the wrong time to add or remove symbol exports in libpq or libecpg, we should expect users to want each documented symbol.
  Tony Reix
  Discussion: https://postgr.es/m/PR3PR02MB6396742E2FC3E77D37A920BC86C79@PR3PR02MB6396.eurprd02.prod.outlook.com
* Fix bogus timetz_zone() results for DYNTZ abbreviations.  (Tom Lane, 2021-09-06)
  timetz_zone() delivered completely wrong answers if the zone was specified by a dynamic TZ abbreviation, because it failed to account for the difference between the POSIX conventions for field values in struct pg_tm and the conventions used in PG-specific datetime code.
  As a stopgap fix, just adjust the tm_year and tm_mon fields to match PG conventions. This is fixed in a different way in HEAD (388e71af8), but I don't want to back-patch the change of reference point.
  Discussion: https://postgr.es/m/CAJ7c6TOMG8zSNEZtCn5SPe+cCk3Lfxb71ZaQwT2F4T7PJ_t=KA@mail.gmail.com
* Fix pkg-config files for static linking  (Peter Eisentraut, 2021-09-06)
  Since ea53100d5 (PostgreSQL 12), the shipped pkg-config files have been broken for statically linking libpq, because libpgcommon and libpgport are missing. This patch adds those two missing private dependencies (in a non-hardcoded way).
  Reported-by: Filip Gospodinov <f@gospodinov.ch>
  Discussion: https://www.postgresql.org/message-id/flat/c7108bde-e051-11d5-a234-99beec01ce2a@gospodinov.ch
* Further portability tweaks for float4/float8 hash functions.  (Tom Lane, 2021-09-04)
  Attempting to make hashfloat4() look as much as possible like hashfloat8(), I'd figured I could replace NaNs with get_float4_nan() before widening to float8. However, results from protosciurus and topminnow show that on some platforms that produces a different bit-pattern from get_float8_nan(), breaking the intent of ce773f230. Rearrange so that we use the result of get_float8_nan() for all NaN cases.
  As before, back-patch.
* Minor improvements for psql help output.  (Tom Lane, 2021-09-04)
  Fix alphabetization of the output of "\?", and improve one description. Update PageOutput counts where needed, fixing breakage from previous patches.
  Haiying Tang (PageOutput fixes by me)
  Discussion: https://postgr.es/m/OS0PR01MB61136018064660F095CB57A8FB129@OS0PR01MB6113.jpnprd01.prod.outlook.com
* Revert "Avoid creating archive status ".ready" files too early"Alvaro Herrera2021-09-04
| | | | | | | | | | This reverts commit 515e3d84a0b5 and equivalent commits in back branches. This solution to the problem has a number of problems, so we'll try again with a different approach. Per note from Andres Freund Discussion: https://postgr.es/m/20210831042949.52eqp5xwbxgrfank@alap3.anarazel.de
* Remove arbitrary MAXPGPATH limit on command lengths in pg_ctl.  (Tom Lane, 2021-09-03)
  Replace fixed-length command buffers with psprintf() calls. We didn't have anything as convenient as psprintf() when this code was written, but now that we do, there's little reason for the limitation to stand. Removing it eliminates some corner cases where (for example) starting the postmaster with a whole lot of options fails.
  Most individual file names that pg_ctl deals with are still restricted to MAXPGPATH, but we've seldom had complaints about that limitation so long as it only applies to one filename.
  Back-patch to all supported branches.
  Phil Krylov
  Discussion: https://postgr.es/m/567e199c6b97ee19deee600311515b86@krylov.eu
* Disallow creating an ICU collation if the DB encoding won't support it.  (Tom Lane, 2021-09-03)
  Previously this was allowed, but the collation effectively vanished into the ether because of the way lookup_collation() works: you could not use the collation, nor even drop it. Seems better to give an error up front than to leave the user wondering why it doesn't work.
  (Because this test is in DefineCollation not CreateCollation, it does not prevent pg_import_system_collations from creating ICU collations, regardless of the initially-chosen encoding.)
  Per bug #17170 from Andrew Bille. Back-patch to v10 where ICU support was added.
  Discussion: https://postgr.es/m/17170-95845cf3f0a9c36d@postgresql.org
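  Illustrative sketch (editorial addition, not part of the commit; collation name hypothetical) of the now-rejected case, in a database whose encoding ICU cannot handle, such as SQL_ASCII:
      -- connected to a database created with ENCODING 'SQL_ASCII'
      CREATE COLLATION my_icu (provider = icu, locale = 'und');
      -- previously accepted but unusable (and undroppable); now raises an error up front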
* Set the volatility of the timestamptz version of date_bin() back to immutable  (John Naylor, 2021-09-03)
  543f36b43d was too hasty in thinking that the volatility of date_bin() had to match date_trunc(), since only the latter references session_timezone.
  Bump catversion.
  Per feedback from Aleksander Alekseev
  Backpatch to v14, as the former commit was
* Fix portability issue in tests from commit ce773f230.  (Tom Lane, 2021-09-03)
  Modern POSIX seems to require strtod() to accept "-NaN", but there's nothing about NaN in SUSv2, and some of our oldest buildfarm members don't like it. Let's try writing it as -'NaN' instead; that seems to produce the same result, at least on Intel hardware.
  Per buildfarm.
* In count_usable_fds(), duplicate stderr not stdin.  (Tom Lane, 2021-09-02)
  We had a complaint that the postmaster fails to start if the invoking program closes stdin. That happens because count_usable_fds expects to be able to dup(0), and if it can't, we conclude there are no free FDs and go belly-up. So far as I can find, though, there is no other place in the server that touches stdin, and it's not unreasonable to expect that a daemon wouldn't use that file.
  As a simple improvement, let's dup FD 2 (stderr) instead. Unlike stdin, it *is* reasonable for us to expect that stderr be open; even if we are configured not to touch it, common libraries such as libc might try to write error messages there.
  Per gripe from Mario Emmenlauer. Given the lack of previous complaints, I'm not excited about pushing this into stable branches, but it seems OK to squeeze it into v14.
  Discussion: https://postgr.es/m/48bafc63-c30f-3962-2ded-f2e985d93e86@emmenlauer.de
* Fix float4/float8 hash functions to produce uniform results for NaNs.  (Tom Lane, 2021-09-02)
  The IEEE 754 standard allows a wide variety of bit patterns for NaNs, of which at least two ("NaN" and "-NaN") are pretty easy to produce from SQL on most machines. This is problematic because our btree comparison functions deem all NaNs to be equal, but our float hash functions know nothing about NaNs and will happily produce varying hash codes for them. That causes unexpected results from queries that hash a column containing different NaN values. It could also produce unexpected lookup failures when using a hash index on a float column, i.e. "WHERE x = 'NaN'" will not find all the rows it should.
  To fix, special-case NaN in the float hash functions, not too much unlike the existing special case that forces zero and minus zero to hash the same. I arranged for the most vanilla sort of NaN (that coming from the C99 NAN constant) to still have the same hash code as before, to reduce the risk to existing hash indexes.
  I dithered about whether to back-patch this into stable branches, but ultimately decided to do so. It's a clear improvement for queries that hash internally. If there is anybody who has -NaN in a hash index, they'd be well advised to re-index after applying this patch ... but the misbehavior if they don't will not be much worse than the misbehavior they had before.
  Per bug #17172 from Ma Liangzhu.
  Discussion: https://postgr.es/m/17172-7505bea9e04e230f@postgresql.org
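  Illustrative sketch (editorial addition, not part of the commit). hashfloat8() is the internal hash support function; calling it directly shows the invariant the fix establishes:
      SELECT hashfloat8('NaN'::float8) = hashfloat8(-'NaN'::float8) AS nan_hashes_match;
      -- should now return true; before the fix the two hash codes could differ,
      -- even though the btree comparison functions treat all NaNs as equal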
* Identify simple column references in extended statistics  (Tomas Vondra, 2021-09-01)
  Until now, when defining extended statistics, everything except a plain column reference was treated as a complex expression. So for example "a" was a column reference, but "(a)" would be an expression. In most cases this does not matter much, but there were a couple of strange consequences. For example
      CREATE STATISTICS s ON a FROM t;
  would fail, because extended stats require at least two columns. But
      CREATE STATISTICS s ON (a) FROM t;
  would succeed, because that requirement does not apply to expressions. Moreover, that statistics object is useless - the optimizer will always use the regular statistics collected for attribute "a".
  So do a bit more work to identify those expressions referencing a single column, and translate them to a simple column reference.
  Backpatch to 14, where support for extended statistics on expressions was introduced.
  Reported-by: Justin Pryzby
  Backpatch-through: 14
  Discussion: https://postgr.es/m/20210816013255.GS10479%40telsasoft.com
* pgbench: Fix bug in measurement of disconnection delays.  (Fujii Masao, 2021-09-01)
  When the -C/--connect option is specified, pgbench establishes and closes the connection for each transaction. In this case pgbench needs to measure the times taken for all those connections and disconnections, in order to include the average connection time in the benchmark result. But previously pgbench could not measure those disconnection delays. To fix the bug, this commit makes pgbench measure the disconnection delays whenever the connection is closed at the end of a transaction, if the -C/--connect option is specified.
  Back-patch to v14. Per discussion, we concluded not to back-patch to v13 or before because changing that behavior in stable branches would surprise users rather than provide benefits.
  Author: Yugo Nagata
  Reviewed-by: Fabien COELHO, Tatsuo Ishii, Asif Rehman, Fujii Masao
  Discussion: https://postgr.es/m/20210614151155.a393bc7d8fed183e38c9f52a@sraoss.co.jp
* Fix the random test failure in 001_rep_changes.  (Amit Kapila, 2021-09-01)
  The check to test whether the subscription workers were restarting after a change in the subscription was failing. The reason was that the test assumed the walsender had started before it reached the 'streaming' state, and the walsender was exiting due to an error before that. The walsender was erroring out before reaching the 'streaming' state because it tried to acquire the slot before the previous walsender had exited.
  In passing, improve the die messages so that it is easier to investigate any future failures.
  Reported-by: Michael Paquier, as per buildfarm
  Author: Ajin Cherian
  Reviewed-by: Masahiko Sawada, Amit Kapila
  Backpatch-through: 10, where this test was introduced
  Discussion: https://postgr.es/m/YRnhFxa9bo73wfpV@paquier.xyz
* VACUUM VERBOSE: Don't report "pages removed".  (Peter Geoghegan, 2021-08-31)
  It doesn't make any sense to report this information, since VACUUM VERBOSE reports on heap relation truncation directly. This was an oversight in commit 7ab96cf6, which made VACUUM VERBOSE output a little more consistent with nearby autovacuum-specific log output. Adjust comments that describe how this is supposed to work in passing.
  Also bring truncation-related VACUUM VERBOSE output in line with the convention established for VACUUM VERBOSE output by commit f4f4a649.
  Author: Peter Geoghegan <pg@bowt.ie>
  Backpatch: 14-, where VACUUM VERBOSE's output changed.
* Don't print extra parens around expressions in extended stats  (Tomas Vondra, 2021-09-01)
  The code printing expressions for extended statistics doubled the parens, producing results like ((a+1)), which is unnecessary and not consistent with how we print expressions elsewhere. Fixed by tweaking the code to produce just a single set of parens.
  Reported by Mark Dilger, fix by me. Backpatch to 14, where support for extended statistics on expressions was added.
  Reported-by: Mark Dilger
  Discussion: https://postgr.es/m/20210122040101.GF27167%40telsasoft.com
* Mark the timestamptz variant of date_bin() as stable  (John Naylor, 2021-08-31)
  Previously, it was immutable by lack of marking. This is not correct, since the time zone could change.
  Bump catversion.
  Discussion: https://www.postgresql.org/message-id/CAFBsxsG2UHk8mOWL0tca%3D_cg%2B_oA5mVRNLhDF0TBw980iOg5NQ%40mail.gmail.com
  Backpatch to v14, when this function came in
* In pg_dump, avoid doing per-table queries for RLS policies.  (Tom Lane, 2021-08-31)
  For no particularly good reason, getPolicies() queried pg_policy separately for each table. We can collect all the policies in a single query instead, and attach them to the correct TableInfo objects using findTableByOid() lookups. On the regression database, this reduces the number of queries substantially, and provides a visible savings even when running against a local server.
  Per complaint from Hubert Depesz Lubaczewski. Since this is such a simple fix and can have a visible performance benefit, back-patch to all supported branches.
  Discussion: https://postgr.es/m/20210826084430.GA26282@depesz.com
* Cache the results of format_type() queries in pg_dump.  (Tom Lane, 2021-08-31)
  There's long been a "TODO: there might be some value in caching the results" annotation on pg_dump's getFormattedTypeName function; but we hadn't gotten around to checking what it was costing us to repetitively look up type names. It turns out that when dumping the current regression database, about 10% of the total number of queries issued are duplicative format_type() queries. However, Hubert Depesz Lubaczewski reported a not-unusual case where these account for over half of the queries issued by pg_dump. Individually these queries aren't expensive, but when network lag is a factor, they add up to a problem. We can very easily add some caching to getFormattedTypeName to solve it.
  Since this is such a simple fix and can have a visible performance benefit, back-patch to all supported branches.
  Discussion: https://postgr.es/m/20210826084430.GA26282@depesz.com
* Rename the role in stats_ext to have regress_ prefix  (Tomas Vondra, 2021-08-31)
  Commit 5be8ce82e8 added a new role to the stats_ext regression suite, but the role name did not start with regress_, causing failures when running with ENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS. Fixed by renaming the role to start with the expected regress_ prefix.
  Backpatch-through: 10, same as the new regression test
  Discussion: https://postgr.es/m/1F238937-7CC2-4703-A1B1-6DC225B8978A%40enterprisedb.com
* Fix lookup error in extended stats ownership check  (Tomas Vondra, 2021-08-31)
  When an ownership check on an extended statistics object failed, the code was calling aclcheck_error_type to report the failure, which is clearly wrong, resulting in cache lookup errors. Fix by calling aclcheck_error.
  This issue has existed since the introduction of extended statistics, so backpatch all the way back to PostgreSQL 10. It went unnoticed because there were no tests triggering the error, so add one.
  Reported-by: Mark Dilger
  Backpatch-through: 10, where extended stats were introduced
  Discussion: https://postgr.es/m/1F238937-7CC2-4703-A1B1-6DC225B8978A%40enterprisedb.com
* Fix missed lock acquisition while inlining new-style SQL functions.  (Tom Lane, 2021-08-31)
  When starting to use a query parsetree loaded from the catalogs, we must begin by applying AcquireRewriteLocks(), to obtain the same relation locks that the parser would have gotten if the query were entered interactively, and to do some other cleanup such as dealing with later-dropped columns. New-style SQL functions are just as subject to this rule as other stored parsetrees; however, of the places dealing with such functions, only init_sql_fcache had gotten the memo. In particular, if we successfully inlined a new-style set-returning SQL function that contained any relation references, we'd either get an assertion failure or attempt to use those relation(s) sans locks.
  I also added AcquireRewriteLocks calls to fmgr_sql_validator and print_function_sqlbody. Desultory experiments didn't demonstrate any failures in those, but I suspect that I just didn't try hard enough. Certainly we don't expect nearby code paths to operate without locks.
  On the same logic of it-ought-to-have-the-same-effects-as-the-old-code, call pg_rewrite_query() in fmgr_sql_validator, too. It's possible that neither code path there needs to bother with rewriting, but doing the analysis to prove that is beyond my goals for today.
  Per bug #17161 from Alexander Lakhin.
  Discussion: https://postgr.es/m/17161-048a1cdff8422800@postgresql.org
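  For context (editorial addition, not part of the commit; object names are hypothetical), a "new-style" SQL function is one written with a SQL-standard body rather than a string literal. A minimal sketch of the set-returning, table-referencing shape involved:
      CREATE TABLE accounts (id int, balance numeric);
      CREATE FUNCTION active_accounts() RETURNS SETOF accounts
          LANGUAGE SQL
          BEGIN ATOMIC
              SELECT * FROM accounts WHERE balance > 0;
          END;
      -- inlining this into a calling query must first take the same lock on
      -- "accounts" that interactive parsing of the query would take
      SELECT * FROM active_accounts();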
* Report tuple address in data-corruption error message  (Alvaro Herrera, 2021-08-30)
  Most data-corruption reports mention the location of the problem, but this one failed to. Add it.
  Backpatch all the way back. In 12 and older, also assign the ERRCODE_DATA_CORRUPTED error code as was done in commit fd6ec93bf890 for 13 and later.
  Discussion: https://postgr.es/m/202108191637.oqyzrdtnheir@alvherre.pgsql
* psql: Fix name quoting on extended statistics  (Alvaro Herrera, 2021-08-30)
  Per our message style guidelines, for human consumption we quote qualified names as a whole rather than each part separately; but commit bc085205c8a4 introduced a deviation for extended statistics and a4d75c86bf15 copied it. I don't agree with this policy applying to names shown by psql, but that's a poor reason to deviate from the practice only in two obscure corners, so make said corners use the same style as everywhere else.
  Backpatch to 14. The first of these is older, but I'm not sure we want to destabilize the psql output in the older branches for such a small thing.
  Discussion: https://postgr.es/m/20210828181618.GS26465@telsasoft.com
* pgbench: Avoid unnecessary measurement of connection delays.  (Fujii Masao, 2021-08-30)
  Commit 547f04e734 changed pgbench so that it used the measured connection delays in its benchmark report only when the -C/--connect option is specified. But previously those delays were unnecessarily measured even when that option is not specified, which was a waste of cycles. This commit improves pgbench so that it avoids such unnecessary measurement.
  Back-patch to v14 where commit 547f04e734 first appeared.
  Author: Yugo Nagata
  Reviewed-by: Fabien COELHO, Asif Rehman, Fujii Masao
  Discussion: https://postgr.es/m/20210614151155.a393bc7d8fed183e38c9f52a@sraoss.co.jp
* Fix incorrect error code in StartupReplicationOrigin().  (Amit Kapila, 2021-08-30)
  ERRCODE_CONFIGURATION_LIMIT_EXCEEDED was used for a checksum failure; use ERRCODE_DATA_CORRUPTED instead.
  Reported-by: Tatsuhito Kasahara
  Author: Tatsuhito Kasahara
  Backpatch-through: 9.6, where it was introduced
  Discussion: https://postgr.es/m/CAP0=ZVLHtYffs8SOWcFJWrBGoRzT9QQbk+_aP+E5AHLNXiOorA@mail.gmail.com
* Keep stats up to date for partitioned tables  (Alvaro Herrera, 2021-08-28)
  In the long-going saga for analyze on partitioned tables, one thing I missed while reverting 0827e8af70f4 is the maintenance of the analyze count and last analyze time for partitioned tables. This is a mostly trivial change that enables users to assess the need for invoking manual ANALYZE on partitioned tables.
  This patch, posted by Justin and modified a bit by me (Álvaro), can be mostly traced back to Hosoya-san, though any problems introduced with the scissors are mine.
  Backpatch to 14, in line with 6f8127b73901.
  Co-authored-by: Yuzuko Hosoya <yuzukohosoya@gmail.com>
  Co-authored-by: Justin Pryzby <pryzby@telsasoft.com>
  Co-authored-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
  Reported-by: Justin Pryzby <pryzby@telsasoft.com>
  Discussion: https://postgr.es/m/20210816222810.GE10479@telsasoft.com
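  Illustrative sketch (editorial addition, not part of the commit; table name hypothetical) of inspecting the counters this change keeps maintained:
      ANALYZE orders;   -- a partitioned table
      SELECT relname, last_analyze, analyze_count
      FROM pg_stat_user_tables
      WHERE relname = 'orders';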
* psql \dX: reference regclass with "pg_catalog." prefix  (Alvaro Herrera, 2021-08-28)
  Déjà vu of commit fc40ba1296a7, for another backslash command. Strictly speaking this isn't a bug, but since all references to catalog objects are schema-qualified, we might as well be consistent.
  The omission first appeared in commit ad600bba0422 and was replicated in a4d75c86bf15; backpatch to 14.
  Author: Justin Pryzby <pryzbyj@telsasoft.com>
  Discussion: https://postgr.es/m/20210827193151.GN26465@telsasoft.com
* psql \dP: reference regclass with "pg_catalog." prefix  (Alvaro Herrera, 2021-08-28)
  Strictly speaking this isn't a bug, but since all references to catalog objects are schema-qualified, we might as well be consistent. The omission first appeared in commit 1c5d9270e339, so backpatch to 12.
  Author: Justin Pryzby <pryzbyj@telsasoft.com>
  Discussion: https://postgr.es/m/20210827193151.GN26465@telsasoft.com
* Fix data loss in wal_level=minimal crash recovery of CREATE TABLESPACE.  (Noah Misch, 2021-08-27)
  If the system crashed between CREATE TABLESPACE and the next checkpoint, the result could be some files in the tablespace unexpectedly containing no rows. Affected files would be those for which the system did not write WAL; see the wal_skip_threshold documentation. Before v13, a different set of conditions governed the writing of WAL; see v12's <sect2 id="populate-pitr">. (The v12 conditions were broader in some ways and narrower in others.) Users may want to audit non-default tablespaces for unexpected short files. The bug could have truncated an index without affecting the associated table, and reindexing the index would fix that particular problem.
  This fixes the bug by making create_tablespace_directories() more like TablespaceCreateDbspace(). create_tablespace_directories() was recursively removing tablespace contents, reasoning that WAL redo would recreate everything removed that way. That assumption holds for other wal_level values. Under wal_level=minimal, the old approach could delete files for which no other copy existed.
  Back-patch to 9.6 (all supported versions).
  Reviewed by Robert Haas and Prabhat Sahu. Reported by Robert Haas.
  Discussion: https://postgr.es/m/CA+TgmoaLO9ncuwvr2nN-J4VEP5XyAcy=zKiHxQzBbFRxxGxm0w@mail.gmail.com
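  Illustrative sketch (editorial addition, not part of the commit; paths and names hypothetical) of the kind of sequence that was at risk under wal_level = minimal:
      -- server running with wal_level = minimal
      CREATE TABLESPACE ts LOCATION '/srv/pg_ts';
      BEGIN;
      CREATE TABLE t (a int) TABLESPACE ts;
      INSERT INTO t SELECT generate_series(1, 100000);  -- may be written without WAL
      COMMIT;
      -- a crash before the next checkpoint could leave t's file unexpectedly empty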
* Count SP-GiST index scans in pg_stat statistics.  (Tom Lane, 2021-08-27)
  Somehow, spgist overlooked the need to call pgstat_count_index_scan(). Hence, pg_stat_all_indexes.idx_scan and equivalent columns never became nonzero for an SP-GiST index, although the related per-tuple counters worked fine.
  This fix works a bit differently from other index AMs, in that the counter increment occurs in spgrescan not spggettuple/spggetbitmap. It looks like this won't make the user-visible semantics noticeably different, so I won't go to the trouble of introducing an is-this-the-first-call flag just to make the counter bumps happen in the same places.
  Per bug #17163 from Christian Quest. Back-patch to all supported versions.
  Discussion: https://postgr.es/m/17163-b8c5cc88322a5e92@postgresql.org
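  Illustrative sketch (editorial addition, not part of the commit; table and index names hypothetical) of observing the now-maintained counter:
      CREATE TABLE points (p point);
      CREATE INDEX points_spgist ON points USING spgist (p);
      SELECT * FROM points WHERE p <@ box '((0,0),(1,1))';   -- may use the SP-GiST index
      SELECT idx_scan FROM pg_stat_user_indexes
      WHERE indexrelname = 'points_spgist';                   -- now increments per index scan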
* Use maintenance_io_concurrency for ANALYZE prefetch  (Stephen Frost, 2021-08-27)
  When prefetching pages for ANALYZE, we should be using maintenance_io_concurrency (by calling get_tablespace_maintenance_io_concurrency(), not get_tablespace_io_concurrency()).
  ANALYZE prefetching was introduced in c6fc50c, so back-patch to 14.
  Backpatch-through: 14
  Reported-By: Egor Rogov
  Discussion: https://postgr.es/m/9beada99-34ce-8c95-fadb-451768d08c64%40postgrespro.ru
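  Illustrative sketch (editorial addition, not part of the commit; tablespace and table names hypothetical, values arbitrary) of the settings that now govern ANALYZE's prefetch depth:
      SET maintenance_io_concurrency = 32;                               -- session/global default
      ALTER TABLESPACE fast_ssd SET (maintenance_io_concurrency = 64);   -- per-tablespace override
      ANALYZE big_table;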
* track_io_timing logging: Don't special case 0 ms.  (Peter Geoghegan, 2021-08-27)
  Adjust track_io_timing related logging code added by commit 94d13d474d. Make it consistent with other nearby autovacuum and autoanalyze logging code by removing logic that suppressed zero millisecond outputs.
  log_autovacuum_min_duration log output now reliably shows "read:" and "write:" millisecond-based values in its report (when track_io_timing is enabled).
  Author: Peter Geoghegan <pg@bowt.ie>
  Reviewed-By: Stephen Frost <sfrost@snowman.net>
  Discussion: https://postgr.es/m/CAH2-WznW0FNxSVQMSRazAMYNfZ6DR_gr5WE78hc6E1CBkkJpzw@mail.gmail.com
  Backpatch: 14-, where the track_io_timing logging was introduced.
* Reorder log_autovacuum_min_duration log output.  (Peter Geoghegan, 2021-08-27)
  This order seems more natural. It starts with details that are particular to heap and index data structures, and ends with system-level costs incurred during the autovacuum worker's VACUUM/ANALYZE operation.
  Author: Peter Geoghegan <pg@bowt.ie>
  Discussion: https://postgr.es/m/CAH2-WzkzxK6ahA9xxsOftRtBX_R0swuHZsvo4QUbak1Bz7hb7Q@mail.gmail.com
  Backpatch: 14-, which enhanced the log output in various ways.
* Handle interaction of regexp's makesearch and MATCHALL more honestly.  (Tom Lane, 2021-08-27)
  Second thoughts about commit 824bf7190: we apply makesearch() to an NFA after having determined whether it is a MATCHALL pattern. Prepending ".*" doesn't make it non-MATCHALL, but it does change the maximum possible match length, and makesearch() failed to update that. This has no ill effects given the stylized usage of search NFAs, but it seems like it's better to keep the data structure consistent. In particular, fixing this allows more honest handling of the MATCHALL check in matchuntil(): we can now assert that maxmatchall is infinity, instead of lamely assuming that it should act that way.
  In passing, improve the code in dump[c]nfa so that infinite maxmatchall is printed as "inf" not a magic number.
* Remove redundant test.  (Tom Lane, 2021-08-25)
  The condition "context_start < context_end" is strictly weaker than "context_end - context_start >= 50", so we don't need both. Oversight in commit ffd3944ab, noted by tanghy.fnst.
  In passing, line-wrap a nearby test to make it more readable.
  Discussion: https://postgr.es/m/OS0PR01MB61137C4054774F44E3A9DC89FBC69@OS0PR01MB6113.jpnprd01.prod.outlook.com
* Fix broken snapshot handling in parallel workers.  (Robert Haas, 2021-08-25)
  Pengchengliu reported an assertion failure in a parallel worker while performing a parallel scan using an overflowed snapshot. The proximate cause is that TransactionXmin was set to an incorrect value. The underlying cause is incorrect snapshot handling in parallel.c.
  In particular, InitializeParallelDSM() was unconditionally calling GetTransactionSnapshot(), because I (rhaas) mistakenly thought that was always retrieving an existing snapshot whereas, at isolation levels less than REPEATABLE READ, it's actually taking a new one. So instead do this only at higher isolation levels where there actually is a single snapshot for the whole transaction.
  By itself, this is not a sufficient fix, because we still need to guarantee that TransactionXmin gets set properly in the workers. The easiest way to do that seems to be to install the leader's active snapshot as the transaction snapshot if the leader did not serialize a transaction snapshot. This doesn't affect the results of future GetTransactionSnapshot() calls since those have to take a new snapshot anyway; what we care about is the side effect of setting TransactionXmin.
  Report by Pengchengliu. Patch by Greg Nancarrow, except for some comment text which I supplied.
  Discussion: https://postgr.es/m/002f01d748ac$eaa781a0$bff684e0$@tju.edu.cn