path: root/contrib
* doc: Spell checking (Peter Eisentraut, 2020-07-05)

* Read until EOF vice stat-reported size in read_binary_file (Joe Conway, 2020-07-04)

  read_binary_file(), used by SQL functions pg_read_file() and friends, uses stat to determine file length to read, when not passed an explicit length as an argument. This is problematic, for example, if the file being read is a virtual file with a stat-reported length of zero. Arrange to read until EOF, or the StringInfo data string length limit, is reached instead.

  Original complaint and patch by me, with significant review, corrections, advice, and code optimizations by Tom Lane. Backpatched to v11. Prior to that only paths relative to the data and log dirs were allowed for files, so no "zero length" files were reachable anyway.

  Reviewed-by: Tom Lane
  Discussion: https://postgr.es/m/flat/969b8d82-5bb2-5fa8-4eb1-f0e685c5d736%40joeconway.com
  Backpatch-through: 11

* Change default of pg_stat_statements.track_planning to off. (Fujii Masao, 2020-07-03)

  Since v13, pg_stat_statements can track the planning time of statements when the track_planning option is enabled. Its default was on. But this feature could cause severe spinlock contention in pg_stat_statements: Robins Tharakan reported that v13 beta1 showed a ~45% performance drop at high DB connection counts (compared with v12.3) during a fully-cached SELECT-only test using pgbench.

  To avoid this performance regression in the default configuration, this commit changes the default of pg_stat_statements.track_planning to off.

  Back-patch to v13 where pg_stat_statements.track_planning was introduced.

  Reported-by: Robins Tharakan
  Author: Fujii Masao
  Reviewed-by: Julien Rouhaud
  Discussion: https://postgr.es/m/2895b53b033c47ccb22972b589050dd9@EX13D05UWC001.ant.amazon.com

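  A minimal sketch of how a site that wants the old behavior back could re-enable the option and read the v13 planning counters. This is an illustration, not part of the commit; it assumes pg_stat_statements is already in shared_preload_libraries.

    -- Re-enable planning statistics (now off by default) and reload.
    ALTER SYSTEM SET pg_stat_statements.track_planning = on;
    SELECT pg_reload_conf();

    -- Inspect the planning counters alongside the execution counters.
    SELECT query, plans, total_plan_time, calls, total_exec_time
    FROM pg_stat_statements
    ORDER BY total_plan_time DESC
    LIMIT 5;
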
* pgstattuple: Have pgstattuple_approx accept TOAST tables (Peter Eisentraut, 2020-06-30)

  TOAST tables have a visibility map and a free space map, so they can be supported by pgstattuple_approx just fine.

  Add test cases to show how various pgstattuple functions accept TOAST tables. Also add similar tests to pg_visibility, which already accepted TOAST tables correctly but had no test coverage for them.

  Reviewed-by: Laurenz Albe <laurenz.albe@cybertec.at>
  Discussion: https://www.postgresql.org/message-id/flat/27c4496a-02b9-dc87-8f6f-bddbef54e0fe@2ndquadrant.com

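  As a sketch of what this enables (the pgstattuple extension must be installed, and "my_table" is a hypothetical table with TOASTable columns), the approximate scan can now be pointed directly at a table's TOAST relation:

    CREATE EXTENSION IF NOT EXISTS pgstattuple;

    -- Look up the TOAST relation of a table and inspect it approximately.
    SELECT * FROM pgstattuple_approx(
      (SELECT reltoastrelid::regclass
       FROM pg_class
       WHERE oid = 'my_table'::regclass)
    );
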
* Avoid using %c printf format for potentially non-ASCII characters. (Tom Lane, 2020-06-29)

  Since %c only passes a C "char" to printf, it's incapable of dealing with multibyte characters. Passing just the first byte of such a character leads to an output string that is visibly not correctly encoded, resulting in undesirable behavior such as encoding conversion failures while sending error messages to clients.

  We've lived with this issue for a long time because it was inconvenient to avoid in a portable fashion. However, now that we always use our own snprintf code, it's reasonable to use the %.*s format to print just one possibly-multibyte character in a string. (We previously avoided that obvious-looking answer in order to work around glibc's bug #6530, cf commits 54cd4f045 and ed437e2b2.)

  Hence, run around and fix a bunch of places that used %c to report a character found in a user-supplied string. For simplicity, I did not touch places that were emitting non-user-facing debug messages, or reporting catalog data that should always be ASCII. (It's also unclear how useful this approach could be in frontend code, where it's less certain that we know what encoding we're dealing with.)

  In passing, improve a couple of poorly-written error messages in pageinspect/heapfuncs.c.

  This is a longstanding issue, but I'm hesitant to back-patch because of the impact on translatable message strings. In any case this fix would not work reliably before v12.

  Tom Lane and Quan Zongliang
  Discussion: https://postgr.es/m/a120087c-4c88-d9d4-1ec5-808d7a7f133d@gmail.com

* Add current substring regular expression syntax (Peter Eisentraut, 2020-06-29)

  SQL:1999 had the syntax SUBSTRING(text FROM pattern FOR escapechar), but this was replaced in SQL:2003 by the clearer SUBSTRING(text SIMILAR pattern ESCAPE escapechar); the newer form was never implemented in PostgreSQL.

  This patch adds that new syntax as an alternative in the parser, and updates documentation and tests to indicate that it is now the preferred alternative.

  Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
  Reviewed-by: Vik Fearing <vik@postgresfriends.org>
  Reviewed-by: Fabien COELHO <coelho@cri.ensmp.fr>
  Discussion: https://www.postgresql.org/message-id/flat/a15db31c-d0f8-8ce0-9039-578a31758adb%402ndquadrant.com

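  For illustration (example values are ours, not from the commit), both spellings now parse and return the same result:

    -- SQL:1999 spelling, already supported:
    SELECT substring('foobar' FROM '%#"o_b#"%' FOR '#');        -- returns 'oob'
    -- SQL:2003 spelling added by this commit, now preferred:
    SELECT substring('foobar' SIMILAR '%#"o_b#"%' ESCAPE '#');  -- returns 'oob'
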
* Replace superuser check by ACLs for replication origin functions (Michael Paquier, 2020-06-14)

  This patch removes the hardcoded check for superuser privileges when executing replication origin functions. Instead, execution is revoked from public, meaning that those functions can be executed by a superuser and that access to them can be granted.

  Author: Martín Marqués
  Reviewed-by: Kyotaro Horiguchi, Michael Paquier, Masahiko Sawada
  Discussion: https://postgr.es/m/CAPdiE1xJMZOKQL3dgHMUrPqysZkgwzSMXETfKkHYnBAB7-0VRQ@mail.gmail.com

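  A hedged example of the delegation this enables (the role name is hypothetical):

    -- Execution is now revoked from PUBLIC instead of being hardcoded to
    -- superuser, so it can be granted to a non-superuser role:
    GRANT EXECUTE ON FUNCTION pg_replication_origin_create(text) TO origin_admin;
    GRANT EXECUTE ON FUNCTION pg_replication_origin_drop(text) TO origin_admin;
    GRANT EXECUTE ON FUNCTION pg_replication_origin_advance(text, pg_lsn) TO origin_admin;
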
* Spelling adjustments (Peter Eisentraut, 2020-06-09)

  Similar to 0fd2a79a637f9f96b9830524823df0454e962f96.

* Use query collation, not column's collation, while examining statistics. (Tom Lane, 2020-06-05)

  Commit 5e0928005 changed the planner so that, instead of blindly using DEFAULT_COLLATION_OID when invoking operators for selectivity estimation, it would use the collation of the column whose statistics we're considering. This was recognized as still being not quite the right thing, but it seemed like a good incremental improvement. However, shortly thereafter we introduced nondeterministic collations, and that creates cases where operators can fail if they're passed the wrong collation. We don't want planning to fail in cases where the query itself would work, so this means that we *must* use the query's collation when invoking operators for estimation purposes.

  The only real problem this creates is in ineq_histogram_selectivity, where the binary search might produce a garbage answer if we perform comparisons using a different collation than the column's histogram is ordered with. However, when the query's collation is significantly different from the column's default collation, the estimate we previously generated would be pretty irrelevant anyway; so it's not clear that this will result in noticeably worse estimates in practice. (A follow-on patch will improve this situation in HEAD, but it seems too invasive for back-patch.)

  The patch requires changing the signatures of mcv_selectivity and allied functions, which are exported and very possibly are used by extensions. In HEAD, I just did that, but an API/ABI break of this sort isn't acceptable in stable branches. Therefore, in v12 the patch introduces "mcv_selectivity_ext" and so on, with signatures matching HEAD, and makes the old functions into wrappers that assume DEFAULT_COLLATION_OID should be used. That does not match the prior behavior, but it should avoid risk of failure in most cases. (In practice, I think most extension datatypes aren't collation-aware, so the change probably doesn't matter to them.)

  Per report from James Lucas. Back-patch to v12 where the problem was introduced.

  Discussion: https://postgr.es/m/CAAFmbbOvfi=wMM=3qRsPunBSLb8BFREno2oOzSBS=mzfLPKABw@mail.gmail.com

* Use CP_SMALL_TLIST for hash aggregate (Tomas Vondra, 2020-05-31)

  Commit 1f39bce021 added disk-based hash aggregation, which may spill incoming tuples to disk. It however did not request projection to make the tuples as narrow as possible, which may mean having to spill much more data than necessary (increasing I/O, pushing other stuff from page cache, etc.).

  This adds CP_SMALL_TLIST in places that may use hash aggregation - we do that only for AGG_HASHED. It's unnecessary for AGG_SORTED, because that either uses explicit Sort (which already does projection) or pre-sorted input (which does not need spilling to disk).

  Author: Tomas Vondra
  Reviewed-by: Jeff Davis
  Discussion: https://postgr.es/m/20200519151202.u2p2gpiawoaznsv2%40development

* Initialize dblink remoteConn struct in all cases (Joe Conway, 2020-05-28)

  Two of the members of rconn were left uninitialized. When dblink_open() is called without an outer transaction it handles the initialization for us, but with an outer transaction it does not. Arrange for initialization in all cases. Backpatch to all supported versions.

  Reported-by: Alexander Lakhin
  Discussion: https://www.postgresql.org/message-id/flat/9bd0744f-5f04-c778-c5b3-809efe9c30c7%40joeconway.com#c545909a41664991aca60c4d70a10ce7

* Clear some style deviations. (Noah Misch, 2020-05-21)

* Initial pgindent and pgperltidy run for v13. (Tom Lane, 2020-05-14)

  Includes some manual cleanup of places that pgindent messed up, most of which weren't per project style anyway.

  Notably, it seems some people didn't absorb the style rules of commit c9d297751, because there were a bunch of new occurrences of function calls with a newline just after the left paren, all with faulty expectations about how the rest of the call would get indented.

* Fix amcheck for page checks concurrent to replay of btree page deletion (Alexander Korotkov, 2020-05-14)

  amcheck expects at least the high key to always exist on a leaf page, even if it is a deleted page. But a replica reinitializes the page during replay of page deletion, causing the deleted page to have no items. Thus, replay of page deletion can cause an error in a concurrent amcheck run.

  This commit relaxes the amcheck expectation, making it tolerate deleted pages with no items.

  Reported-by: Konstantin Knizhnik
  Discussion: https://postgr.es/m/CAPpHfdt_OTyQpXaPJcWzV2N-LNeNJseNB-K_A66qG%3DL518VTFw%40mail.gmail.com
  Author: Alexander Korotkov
  Reviewed-by: Peter Geoghegan
  Backpatch-through: 11

* Dial back -Wimplicit-fallthrough to level 3 (Alvaro Herrera, 2020-05-13)

  The additional pain from level 4 is excessive for the gain.

  Also revert all the source annotation changes to their original wordings, to avoid back-patching pain.

  Discussion: https://postgr.es/m/31166.1589378554@sss.pgh.pa.us

* Fix straggler (Alvaro Herrera, 2020-05-12)

  contrib/pgcrypto did contain an unedited fall-through marker after all.

* Propagate ALTER TABLE ... SET STORAGE to indexes (Peter Eisentraut, 2020-05-08)

  When creating a new index, the attstorage setting of the table column is copied to regular (non-expression) index columns. But a later ALTER TABLE ... SET STORAGE is not propagated to indexes, thus creating an inconsistent and undumpable state.

  Discussion: https://www.postgresql.org/message-id/flat/9765d72b-37c0-06f5-e349-2a580aafd989%402ndquadrant.com

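  A minimal sketch of the behavior being fixed (table and index names are illustrative):

    CREATE TABLE t (doc text);
    CREATE INDEX t_doc_idx ON t (doc);
    ALTER TABLE t ALTER COLUMN doc SET STORAGE EXTERNAL;

    -- With this commit the index column follows the table column,
    -- so both rows should report attstorage = 'e'.
    SELECT attrelid::regclass, attname, attstorage
    FROM pg_attribute
    WHERE attrelid IN ('t'::regclass, 't_doc_idx'::regclass)
      AND attname = 'doc';
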
* Remove obsolete amcheck comment. (Peter Geoghegan, 2020-05-05)

  Oversight in commit d114cc53.

* Change the display of WAL usage statistics in Explain. (Amit Kapila, 2020-05-05)

  In commit 33e05f89c5, we added the option to display WAL usage statistics in Explain and auto_explain. The display format used two spaces between each field, which is inconsistent with buffer usage statistics, which use one space between each field. Change the format to make WAL usage statistics consistent with buffer usage statistics.

  This commit also changes the wording "full page writes" to "full page images" for WAL usage statistics, to make it consistent with other parts of the code and docs.

  Author: Julien Rouhaud, Amit Kapila
  Reviewed-by: Justin Pryzby, Kyotaro Horiguchi and Amit Kapila
  Discussion: https://postgr.es/m/CAB-hujrP8ZfUkvL5OYETipQwA=e3n7oqHFU=4ZLxWS_Cza3kQQ@mail.gmail.com

* Add posting list tuple amcheck test case. (Peter Geoghegan, 2020-05-04)

  Add a test case to contrib/amcheck that creates coverage of code paths that are used to verify posting list tuples (tuples created when deduplication merges together existing tuples to avoid a leaf page split).

* Get rid of trailing semicolons in C macro definitions. (Tom Lane, 2020-05-01)

  Writing a trailing semicolon in a macro is almost never the right thing, because you almost always want to write a semicolon after each macro call instead. (Even if there was some reason to prefer not to, pgindent would probably make a hash of code formatted that way; so within PG the rule should basically be "don't do it".) Thus, if we have a semi inside the macro, the compiler sees "something;;". Much of the time the extra empty statement is harmless, but it could lead to mysterious syntax errors at call sites. In perhaps an overabundance of neatnik-ism, let's run around and get rid of the excess semicolons wherever possible.

  The only thing worse than a mysterious syntax error is a mysterious syntax error that only happens in the back branches; therefore, backpatch these changes where relevant, which is most of them because most of these mistakes are old. (The lack of reported problems shows that this is largely a hypothetical issue, but still, it could bite us in some future patch.)

  John Naylor and Tom Lane
  Discussion: https://postgr.es/m/CACPNZCs0qWTqJ2QUSGJ07B7uvAvzMb-KbG2q+oo+J3tsWN5cqw@mail.gmail.com

* Rename connection parameters to control min/max SSL protocol version in libpq (Michael Paquier, 2020-04-30)

  The libpq parameters ssl{max|min}protocolversion are renamed to use underscores, to become ssl_{max|min}_protocol_version. The related environment variables still use the names introduced in commit ff8ca5f that added the feature.

  Per complaint from Peter Eisentraut (this was also mentioned by me in the original patch review but the issue got discarded).

  Author: Daniel Gustafsson
  Reviewed-by: Peter Eisentraut, Michael Paquier
  Discussion: https://postgr.es/m/b319e449-318d-e691-4997-1327e166fcc4@2ndquadrant.com

* Check slot->restart_lsn validity in a few more places (Alvaro Herrera, 2020-04-28)

  Lack of these checks could cause visible misbehavior, including assertion failures. This was missed in commit c6550776394e, whereby restart_lsn becomes invalid when the size limit is exceeded.

  Also reword some existing error messages, and add errdetail(), so that the reported errors all match in spirit.

  Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
  Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
  Discussion: https://postgr.es/m/20200408.093710.447591748588426656.horikyota.ntt@gmail.com

* Update Unicode data to Unicode 13.0.0 and CLDR 37 (Peter Eisentraut, 2020-04-24)

* Fix cache reference leak in contrib/sepgsql. (Tom Lane, 2020-04-16)

  fixup_whole_row_references() did the wrong thing with a dropped column, resulting in a commit-time warning about a cache reference leak.

  I (tgl) added a test case exercising this, but back-patched the test only as far as v10; the patch didn't apply cleanly to 9.6 and it didn't seem worth the trouble to adapt it. The bug is pretty old though, so apply the code change all the way back.

  Michael Luo, with cosmetic improvements by me
  Discussion: https://postgr.es/m/BYAPR08MB5606D1453D7F50E2AF4D2FD29AD80@BYAPR08MB5606.namprd08.prod.outlook.com

* Harmonize nbtree page split point code. (Peter Geoghegan, 2020-04-13)

  An nbtree split point can be thought of as a point between two adjoining tuples from an imaginary version of the page being split that includes the incoming/new item (in addition to the items that really are on the page). These adjoining tuples are called the lastleft and firstright tuples.

  The variables that represent split points contained a field called firstright, which is an offset number of the first data item from the original page that goes on the new right page. The corresponding tuple from origpage was usually the same thing as the actual firstright tuple, but not always: the firstright tuple is sometimes the new/incoming item instead. This situation seems unnecessarily confusing.

  Make things clearer by renaming the origpage offset returned by _bt_findsplitloc() to "firstrightoff". We now have a firstright tuple and a firstrightoff offset number which are comparable to the newitem/lastleft tuples and the newitemoff/lastleftoff offset numbers respectively. Also make sure that we are consistent about how we describe nbtree page split point state.

  Push the responsibility for dealing with pg_upgrade'd !heapkeyspace indexes down to lower level code, relieving _bt_split() from dealing with it directly. This means that we always have a palloc'd left page high key on the leaf level, no matter what. This enables simplifying some of the code (and code comments) within _bt_split().

  Finally, restructure the page split code to make it clearer why suffix truncation (which only takes place during leaf page splits) is completely different to the first data item truncation that takes place during internal page splits. Tuples are marked as having fewer attributes stored in both cases, and the firstright tuple is truncated in both cases, so it's easy to imagine somebody missing the distinction.

* Use perl warnings pragma consistently (Andrew Dunstan, 2020-04-13)

  We've had a mixture of the warnings pragma, the -w switch on the shebang line, and no warnings at all. This patch removes the -w switch and adds the warnings pragma to all perl sources missing it. It raises the severity of the TestingAndDebugging::RequireUseWarnings perlcritic policy to level 5, so that we catch any future violations.

  Discussion: https://postgr.es/m/20200412074245.GB623763@rfd.leadboat.com

* Cosmetic fixups for WAL usage work. (Amit Kapila, 2020-04-13)

  Reported-by: Justin Pryzby and Euler Taveira
  Author: Justin Pryzby and Julien Rouhaud
  Reviewed-by: Amit Kapila
  Discussion: https://postgr.es/m/CAB-hujrP8ZfUkvL5OYETipQwA=e3n7oqHFU=4ZLxWS_Cza3kQQ@mail.gmail.com

* Add contrib/amcheck debug message. (Peter Geoghegan, 2020-04-10)

  Add a DEBUG1 message indicating that verification of the index structure is underway. Also reduce the severity level of the existing "tree level" debug message to DEBUG1. It should never have been made DEBUG2. Any B-Tree index with more than a couple of levels will generally also have so many pages that the per-page DEBUG2 messages will become completely unmanageable.

  In passing, add a new "Tip" to the docs that advises users that run into corruption that the debug messages might provide useful additional context.

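  To see the new message, a session might run something like the following (the index name is hypothetical):

    CREATE EXTENSION IF NOT EXISTS amcheck;
    SET client_min_messages = debug1;          -- surfaces the new DEBUG1 message
    SELECT bt_index_check('t_doc_idx', true);  -- heapallindexed verification
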
* Consider Incremental Sort paths at additional places (Tomas Vondra, 2020-04-07)

  Commit d2d8a229bc introduced Incremental Sort, but it was considered only in create_ordered_paths() as an alternative to regular Sort. There are many other places that require sorted input and might benefit from considering Incremental Sort too.

  This patch modifies a number of those places, but not all. The concern is that just adding Incremental Sort to any place that already adds Sort may increase the number of paths considered, negatively affecting planning time, without any benefit. So we've taken a more conservative approach, based on an analysis of which places affect a set of queries that seemed practical. This means some less common queries may not benefit from Incremental Sort yet.

  Author: Tomas Vondra
  Reviewed-by: James Coleman
  Discussion: https://postgr.es/m/CAPpHfds1waRZ=NOmueYq0sx1ZSCnt+5QJvizT8ndT2=etZEeAQ@mail.gmail.com

* Introduce xid8-based functions to replace txid_XXX. (Thomas Munro, 2020-04-07)

  The txid_XXX family of fmgr functions exposes 64 bit transaction IDs to users as int8. Now that we have an SQL type xid8 for FullTransactionId, define a new set of functions including pg_current_xact_id() and pg_current_snapshot() based on that. Keep the old functions around too, for now.

  It's a bit sneaky to use the same C functions for both, but since the binary representation is identical except for the signedness of the type, and since older functions are the ones using the wrong signedness, and since we'll presumably drop the older ones after a reasonable period of time, it seems reasonable to switch to FullTransactionId internally and share the code for both.

  Reviewed-by: Fujii Masao <masao.fujii@oss.nttdata.com>
  Reviewed-by: Takao Fujii <btfujiitkp@oss.nttdata.com>
  Reviewed-by: Yoshikazu Imai <imai.yoshikazu@fujitsu.com>
  Reviewed-by: Mark Dilger <mark.dilger@enterprisedb.com>
  Discussion: https://postgr.es/m/20190725000636.666m5mad25wfbrri%40alap3.anarazel.de

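  For illustration, the new functions sit alongside the old txid_* ones:

    SELECT pg_current_xact_id();                     -- xid8 flavor of txid_current()
    SELECT pg_current_snapshot();                    -- xid8 flavor of txid_current_snapshot()
    SELECT pg_snapshot_xmin(pg_current_snapshot());  -- xid8 flavor of txid_snapshot_xmin()
    SELECT txid_current();                           -- older int8-based function still works
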
* Add the option to report WAL usage in EXPLAIN and auto_explain. (Amit Kapila, 2020-04-06)

  This commit adds a new option WAL, similar to the existing option BUFFERS, in the EXPLAIN command. This option allows information on WAL record generation, added by commit df3b181499, to be included in EXPLAIN output.

  This also allows the WAL usage information to be displayed via the auto_explain module. A new parameter auto_explain.log_wal controls whether WAL usage statistics are printed when an execution plan is logged. This parameter has no effect unless auto_explain.log_analyze is enabled.

  Author: Julien Rouhaud
  Reviewed-by: Dilip Kumar and Amit Kapila
  Discussion: https://postgr.es/m/CAB-hujrP8ZfUkvL5OYETipQwA=e3n7oqHFU=4ZLxWS_Cza3kQQ@mail.gmail.com

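  A sketch of the new options (the table is hypothetical; WAL requires ANALYZE, and the auto_explain part assumes the module is available to LOAD):

    -- EXPLAIN side:
    EXPLAIN (ANALYZE, BUFFERS, WAL)
    INSERT INTO t SELECT generate_series(1, 1000);

    -- auto_explain side (log_wal has no effect unless log_analyze is on):
    LOAD 'auto_explain';
    SET auto_explain.log_analyze = on;
    SET auto_explain.log_wal = on;
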
* Allow pg_stat_statements to track WAL usage statistics. (Amit Kapila, 2020-04-05)

  This commit adds three new columns in pg_stat_statements output to display WAL usage statistics added by commit df3b181499.

  This commit doesn't bump the version of pg_stat_statements as the same is done for this release in commit 17e0328224.

  Author: Kirill Bychik and Julien Rouhaud
  Reviewed-by: Julien Rouhaud, Fujii Masao, Dilip Kumar and Amit Kapila
  Discussion: https://postgr.es/m/CAB-hujrP8ZfUkvL5OYETipQwA=e3n7oqHFU=4ZLxWS_Cza3kQQ@mail.gmail.com

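  The three new columns can be read like this (the ordering choice is ours, not part of the commit):

    SELECT query, calls, wal_records, wal_fpi, wal_bytes
    FROM pg_stat_statements
    ORDER BY wal_bytes DESC
    LIMIT 5;
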
* Skip WAL for new relfilenodes, under wal_level=minimal. (Noah Misch, 2020-04-04)

  Until now, only selected bulk operations (e.g. COPY) did this. If a given relfilenode received both a WAL-skipping COPY and a WAL-logged operation (e.g. INSERT), recovery could lose tuples from the COPY. See src/backend/access/transam/README section "Skipping WAL for New RelFileNode" for the new coding rules. Maintainers of table access methods should examine that section.

  To maintain data durability, just before commit, we choose between an fsync of the relfilenode and copying its contents to WAL. A new GUC, wal_skip_threshold, guides that choice. If this change slows a workload that creates small, permanent relfilenodes under wal_level=minimal, try adjusting wal_skip_threshold. Users setting a timeout on COMMIT may need to adjust that timeout, and log_min_duration_statement analysis will reflect time consumption moving to COMMIT from commands like COPY.

  Internally, this requires a reliable determination of whether RollbackAndReleaseCurrentSubTransaction() would unlink a relation's current relfilenode. Introduce rd_firstRelfilenodeSubid. Amend the specification of rd_createSubid such that the field is zero when a new rel has an old rd_node. Make relcache.c retain entries for certain dropped relations until end of transaction.

  Bump XLOG_PAGE_MAGIC, since this introduces XLOG_GIST_ASSIGN_LSN. Future servers accept older WAL, so this bump is discretionary.

  Kyotaro Horiguchi, reviewed (in earlier, similar versions) by Robert Haas. Heikki Linnakangas and Michael Paquier implemented earlier designs that materially clarified the problem. Reviewed, in earlier designs, by Andrew Dunstan, Andres Freund, Alvaro Herrera, Tom Lane, Fujii Masao, and Simon Riggs. Reported by Martijn van Oosterhout.

  Discussion: https://postgr.es/m/20150702220524.GA9392@svana.org

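  A hedged tuning sketch for the new GUC mentioned above (the value is arbitrary, not a recommendation from the commit):

    -- Only relevant under wal_level = minimal.
    ALTER SYSTEM SET wal_skip_threshold = '8MB';
    SELECT pg_reload_conf();
    SHOW wal_skip_threshold;
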
* Fix bogus CALLED_AS_TRIGGER() defenses. (Tom Lane, 2020-04-03)

  contrib/lo's lo_manage() thought it could use trigdata->tg_trigger->tgname in its error message about not being called as a trigger. That naturally led to a core dump.

  unique_key_recheck() figured it could Assert that fcinfo->context is a TriggerData node in advance of having checked that it's being called as a trigger. That's harmless in production builds, and perhaps not that easy to reach in any case, but it's logically wrong.

  The first of these per bug #16340 from William Crowell; the second from manual inspection of other CALLED_AS_TRIGGER call sites.

  Back-patch the lo.c change to all supported branches, the other to v10 where the thinko crept in.

  Discussion: https://postgr.es/m/16340-591c7449dc7c8c47@postgresql.org

* Fix whitespace (Peter Eisentraut, 2020-04-02)

* Allow pg_stat_statements to track planning statistics. (Fujii Masao, 2020-04-02)

  This commit makes pg_stat_statements support the new GUC pg_stat_statements.track_planning. If this option is enabled, pg_stat_statements tracks the planning statistics of the statements, e.g., the number of times the statement was planned, the total time spent planning the statement, etc. This feature is useful for identifying statements that take a long time to plan. Previously, since pg_stat_statements tracked only execution statistics, it could not be used for that purpose.

  The planning and execution statistics are stored at the end of each phase separately. So there is not always a one-to-one relationship between them. For example, if the statement is successfully planned but fails in the execution phase, only its planning statistics are stored. This may cause users to see different pg_stat_statements results than in the previous version. To avoid this, pg_stat_statements.track_planning needs to be disabled.

  This commit bumps the version of pg_stat_statements to 1.8 since it changes the definition of the pg_stat_statements function.

  Author: Julien Rouhaud, Pascal Legrand, Thomas Munro, Fujii Masao
  Reviewed-by: Sergei Kornilov, Tomas Vondra, Yoshikazu Imai, Haribabu Kommi, Tom Lane
  Discussion: https://postgr.es/m/CAHGQGwFx_=DO-Gu-MfPW3VQ4qC7TfVdH2zHmvZfrGv6fQ3D-Tw@mail.gmail.com
  Discussion: https://postgr.es/m/CAEepm=0e59Y_6Q_YXYCTHZkqOc6H2pJ54C_Xe=VFu50Aqqp_sA@mail.gmail.com
  Discussion: https://postgr.es/m/DB6PR0301MB21352F6210E3B11934B0DCC790B00@DB6PR0301MB2135.eurprd03.prod.outlook.com

* Clean up parsing of ltree and lquery some more. (Tom Lane, 2020-04-01)

  Fix lquery parsing to handle repeated flag characters correctly, and to enforce the max label length correctly in some cases where it did not before, and to detect empty labels in some cases where it did not before.

  In a more cosmetic vein, use a switch rather than if-then chains to handle the different states, and avoid unnecessary checks on charlen when looking for ASCII characters, and factor out multiple copies of the label length checking code.

  Tom Lane and Dmitry Belyavsky
  Discussion: https://postgr.es/m/CADqLbzLVkBuPX0812o+z=c3i6honszsZZ6VQOSKR3VPbB56P3w@mail.gmail.com

* Add support for binary I/O of ltree, lquery, and ltxtquery types. (Tom Lane, 2020-04-01)

  Not much to say here --- does what it says on the tin. The "binary" representation in each case is really just the same as the text format, though we prefix a version-number byte in case anyone ever feels motivated to change that. Thus, there's not any expectation of improved speed or reduced space; the point here is just to allow clients to use binary format for all columns of a query result or COPY data.

  This makes use of the recently added ALTER TYPE support to add binary I/O functions to an existing data type. As in commit a80818605, we can piggy-back on there already being a new-for-v13 version of the ltree extension, so we don't need a new update script file.

  Nino Floris, reviewed by Alexander Korotkov and myself
  Discussion: https://postgr.es/m/CANmj9Vxx50jOo1L7iSRxd142NyTz6Bdcgg7u9P3Z8o0=HGkYyQ@mail.gmail.com

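  A small sketch of what this enables from psql (file and table names are illustrative):

    CREATE EXTENSION IF NOT EXISTS ltree;
    CREATE TABLE paths (p ltree);
    INSERT INTO paths VALUES ('Top.Science.Astronomy');

    -- Binary COPY now works because ltree has send/recv functions:
    \copy paths TO 'paths.bin' WITH (FORMAT binary)
    \copy paths FROM 'paths.bin' WITH (FORMAT binary)
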
* Improve selectivity estimation for assorted match-style operators. (Tom Lane, 2020-04-01)

  Quite a few matching operators such as JSONB's @> used "contsel" and "contjoinsel" as their selectivity estimators. That was a bad idea, because (a) contsel is only a stub, yielding a fixed default estimate, and (b) that default is 0.001, meaning we estimate these operators as five times more selective than equality, which is surely pretty silly.

  There's a good model for improving this in ltree's ltreeparentsel(): for any "var OP constant" query, we can try applying the operator to all of the column's MCV and histogram values, taking the latter as being a random sample of the non-MCV values. That code is actually 100% generic, except for the question of exactly what default selectivity ought to be plugged in when we don't have stats.

  Hence, migrate the guts of ltreeparentsel() into the core code, provide wrappers "matchingsel" and "matchingjoinsel" with a more-appropriate default estimate, and use those for the non-geometric operators that formerly used contsel (mostly JSONB containment operators and tsquery matching).

  Also apply this code to some match-like operators in hstore, ltree, and pg_trgm, including the former users of ltreeparentsel as well as ones that improperly used contsel. Since commit 911e70207 just created new versions of those extensions that we haven't released yet, we can sneak this change into those new versions instead of having to create an additional generation of update scripts.

  Patch by me, reviewed by Alexey Bashtanov
  Discussion: https://postgr.es/m/12237.1582833074@sss.pgh.pa.us

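  The practical effect shows up in row estimates for queries such as the following (table, column, and data are hypothetical; with statistics present, the @> estimate now derives from MCV/histogram entries rather than a flat 0.001 default):

    ANALYZE docs;
    EXPLAIN SELECT * FROM docs WHERE body @> '{"status": "active"}'::jsonb;
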
* Remove header noise from test_decoding test (Alvaro Herrera, 2020-03-31)

  Use psql's expanded output to avoid a pointless header.

  Kyotaro Horiguchi, after an idea of Michael Paquier
  Discussion: https://postgr.es/m/20181120050744.GJ4400@paquier.xyz

* Fix lquery's NOT handling, and add ability to quantify non-'*' items. (Tom Lane, 2020-03-31)

  The existing implementation of the ltree ~ lquery match operator is sufficiently complex and undocumented that it's hard to tell exactly what it does. But one thing it clearly gets wrong is the combination of NOT symbols (!) and '*' symbols. A pattern such as '*.!foo.*' should, by any ordinary understanding of regular expression behavior, match any ltree that has at least one label that's not "foo". As best we can tell by experimentation, what it's actually matching is any ltree in which *no* label is "foo". That's surprising, and not at all what the documentation says.

  Now, that's arguably a useful behavior, so if we rewrite to fix the bug we should provide some other way to get it. To do so, add the ability to attach lquery quantifiers to non-'*' items as well as '*'s. Then the pattern '!foo{,}' expresses "any ltree in which no label is foo". For backwards compatibility, the default quantifier for non-'*' items has to be "{1}", although the default for '*' items is '{,}'. I wouldn't have done it like that in a green field, but it's not totally horrible.

  Armed with that, rewrite checkCond() from scratch. Treating '*' and non-'*' items alike makes it simpler, not more complicated, so that the function actually gets a lot shorter than it was.

  Filip Rembiałkowski, Tom Lane, Nikita Glukhov, per a very ancient bug report from M. Palm
  Discussion: https://postgr.es/m/CAP_rww=waX2Oo6q+MbMSiZ9ktdj6eaJj0cQzNu=Ry2cCDij5fw@mail.gmail.com

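  Illustrative queries for the two behaviors described above (our own examples, not from the commit):

    -- At least one label is not 'foo' -> true for this path:
    SELECT 'bar.foo.baz'::ltree ~ '*.!foo.*'::lquery;
    -- No label is 'foo' (new quantifier spelling) -> false for this path:
    SELECT 'bar.foo.baz'::ltree ~ '!foo{,}'::lquery;
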
* Improve error messages in ltree_in and lquery_in. (Tom Lane, 2020-03-31)

  Ensure that the type name is mentioned in all cases (bare "syntax error" isn't that helpful). Avoid using the term "level", since that's not used in the documentation.

  Phrase error position reports as "at character N" not "in position N"; the latter seems ambiguous, and it's certainly not how we say it elsewhere. For the same reason, make the character position values be 1-based not 0-based.

  Provide a position in more cases. (I continued to leave that out of messages complaining about end-of-input, where it seemed pointless, as well as messages complaining about overall input complexity, where fingering any one part of the input would be arbitrary.)

  Discussion: https://postgr.es/m/15582.1585529626@sss.pgh.pa.us

* Implement operator class parameters (Alexander Korotkov, 2020-03-30)

  PostgreSQL provides a set of template index access methods, where opclasses have much freedom in the semantics of indexing. These index AMs are GiST, GIN, SP-GiST and BRIN. There, opclasses define the representation of keys, operations on them, and the supported search strategies. So, it's natural that opclasses may be faced with some tradeoffs, which require user-side decisions. This commit implements opclass parameters, allowing users to set some values which tell the opclass how to index the particular dataset.

  This commit doesn't introduce new storage in the system catalog. Instead it uses pg_attribute.attoptions, which is used for table column storage options but unused for index attributes.

  In order to avoid changing the signature of each opclass support function, we implement a unified way to pass options to opclass support functions. Options are set in fn_expr as a constant bytea expression. That is possible due to the fact that opclass support functions are executed outside of expressions, so fn_expr is unused for them.

  This commit comes with some examples of opclass options usage. We parametrize the signature length in GiST. That applies to multiple opclasses: tsvector_ops, gist__intbig_ops, gist_ltree_ops, gist__ltree_ops, gist_trgm_ops and gist_hstore_ops. We also parametrize the maximum number of integer ranges for gist__int_ops. However, the main future usage of this feature is expected to be json, where users would be able to specify which way to index particular json parts.

  Catversion is bumped.

  Discussion: https://postgr.es/m/d22c3a18-31c7-1879-fc11-4c1ce2f5e5af%40postgrespro.ru
  Author: Nikita Glukhov, revised by me
  Reviewed-by: Nikolay Shaplov, Robert Haas, Tom Lane, Tomas Vondra, Alvaro Herrera

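  One of the examples shipped with the commit is pg_trgm's GiST signature length; usage looks roughly like this (table and index names are hypothetical):

    CREATE EXTENSION IF NOT EXISTS pg_trgm;
    CREATE INDEX t_doc_trgm_idx ON t
        USING gist (doc gist_trgm_ops (siglen = 32));  -- per-opclass option
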
* Expose BufferUsageAccumDiff(). (Fujii Masao, 2020-03-30)

  Previously pg_stat_statements calculated the difference of buffer counters with its own code even though BufferUsageAccumDiff() had the same code. This commit exposes BufferUsageAccumDiff() and makes pg_stat_statements use it for the calculation, in order to simplify the code.

  This change will also be useful for the upcoming patch for the planning counters in pg_stat_statements, because that patch will add one more place that calculates the difference of buffer counters, and that can easily be done by using BufferUsageAccumDiff().

  Author: Julien Rouhaud
  Reviewed-by: Fujii Masao
  Discussion: https://postgr.es/m/bdfee4e0-a304-2498-8da5-3cb52c0a193e@oss.nttdata.com

* Cosmetic improvements in ltree code. (Tom Lane, 2020-03-29)

  Add more comments in ltree.h, and correct a misstatement or two.

  Use a symbol, rather than hardwired constants, for the maximum length of an ltree label. The max length is still hardwired in the associated error messages, but I want to clean that up as part of a separate patch to improve the error messages.

* Fix lquery's behavior for consecutive '*' items. (Tom Lane, 2020-03-28)

  Something like "*{2}.*{3}" should presumably mean the same as "*{5}", but it didn't. Improve that.

  Get rid of an undocumented and remarkably ugly (though not, as far as I can tell, actually unsafe) static variable in favor of passing more arguments to checkCond().

  Reverse-engineer some commentary. This function, like all of ltree, is still far short of what I would consider the minimum acceptable level of internal documentation, but at least now it has more than zero comments.

  Although this certainly seems like a bug fix, people might not thank us for changing query behavior in stable branches, so no back-patch.

  Nikita Glukhov, with cosmetic improvements by me
  Discussion: https://postgr.es/m/CAP_rww=waX2Oo6q+MbMSiZ9ktdj6eaJj0cQzNu=Ry2cCDij5fw@mail.gmail.com

* Protect against overflow of ltree.numlevel and lquery.numlevel. (Tom Lane, 2020-03-28)

  These uint16 fields could be overflowed by excessively long input, producing strange results. Complain for invalid input.

  Likewise check for out-of-range values of the repeat counts in lquery. (We don't try too hard on that one, notably not bothering to detect if atoi's result has overflowed.)

  Also detect length overflow in ltree_concat.

  In passing, be more consistent about whether "syntax error" messages include the type name. Also, clarify the documentation about what the size limit is.

  This has been broken for a long time, so back-patch to all supported branches.

  Nikita Glukhov, reviewed by Benjie Gillam and Tomas Vondra
  Discussion: https://postgr.es/m/CAP_rww=waX2Oo6q+MbMSiZ9ktdj6eaJj0cQzNu=Ry2cCDij5fw@mail.gmail.com

* Revert "Skip WAL for new relfilenodes, under wal_level=minimal."Noah Misch2020-03-22
| | | | | | | | This reverts commit cb2fd7eac285b1b0a24eeb2b8ed4456b66c5a09f. Per numerous buildfarm members, it was incompatible with parallel query, and a test case assumed LP64. Back-patch to 9.5 (all supported versions). Discussion: https://postgr.es/m/20200321224920.GB1763544@rfd.leadboat.com
* Skip WAL for new relfilenodes, under wal_level=minimal. (Noah Misch, 2020-03-21)

  Until now, only selected bulk operations (e.g. COPY) did this. If a given relfilenode received both a WAL-skipping COPY and a WAL-logged operation (e.g. INSERT), recovery could lose tuples from the COPY. See src/backend/access/transam/README section "Skipping WAL for New RelFileNode" for the new coding rules. Maintainers of table access methods should examine that section.

  To maintain data durability, just before commit, we choose between an fsync of the relfilenode and copying its contents to WAL. A new GUC, wal_skip_threshold, guides that choice. If this change slows a workload that creates small, permanent relfilenodes under wal_level=minimal, try adjusting wal_skip_threshold. Users setting a timeout on COMMIT may need to adjust that timeout, and log_min_duration_statement analysis will reflect time consumption moving to COMMIT from commands like COPY.

  Internally, this requires a reliable determination of whether RollbackAndReleaseCurrentSubTransaction() would unlink a relation's current relfilenode. Introduce rd_firstRelfilenodeSubid. Amend the specification of rd_createSubid such that the field is zero when a new rel has an old rd_node. Make relcache.c retain entries for certain dropped relations until end of transaction.

  Back-patch to 9.5 (all supported versions). This introduces a new WAL record type, XLOG_GIST_ASSIGN_LSN, without bumping XLOG_PAGE_MAGIC. As always, update standby systems before master systems. This changes sizeof(RelationData) and sizeof(IndexStmt), breaking binary compatibility for affected extensions. (The most recent commit to affect the same class of extensions was 089e4d405d0f3b94c74a2c6a54357a84a681754b.)

  Kyotaro Horiguchi, reviewed (in earlier, similar versions) by Robert Haas. Heikki Linnakangas and Michael Paquier implemented earlier designs that materially clarified the problem. Reviewed, in earlier designs, by Andrew Dunstan, Andres Freund, Alvaro Herrera, Tom Lane, Fujii Masao, and Simon Riggs. Reported by Martijn van Oosterhout.

  Discussion: https://postgr.es/m/20150702220524.GA9392@svana.org