path: root/src/backend
* Account for collation when coercing the output of a SQL function.  (Tom Lane, 2020-04-14)
  Commit 913bbd88d overlooked that the result of coerce_to_target_type
  might need collation fixups. Per report from Andreas Joseph Krogh.

  Discussion: https://postgr.es/m/VisenaEmail.72.37d08ec2b8cb8fb5.17179940cd3@tc7-visena

* Set Perl search path more idiomatically  (Andrew Dunstan, 2020-04-14)
  Back in commits 1df92eeafe, f884a96819, and 592123efbb I used some
  hackish code to set the script search path, unaware despite decades of
  Perl that there was a completely standard way to do this. This patch
  changes those cases to use the standard Perl FindBin package.

* Rearrange _bt_insertonpg() "update metapage" code.  (Peter Geoghegan, 2020-04-14)
  Nest the "update metapage as part of insert into root-like page" branch
  inside the broader "insert into internal page" branch. This improves
  readability.

* Fix collection of typos and grammar mistakes in the tree, volume 2  (Michael Paquier, 2020-04-14)
  This fixes some comments and documentation new as of Postgres 13, and is
  a follow-up of the work done in dd0f37e.

  Author: Justin Pryzby
  Discussion: https://postgr.es/m/20200408165653.GF2228@telsasoft.com

* Add defensive "split_only_page" nbtree assertion.  (Peter Geoghegan, 2020-04-13)
  Clearly it's not okay for nbtree to split a page that is the only page
  on its level, and then find that it has to split the parent one level up
  in turn. There is simply no code to handle the split_only_page case in
  the _bt_insertonpg() "newitem won't fit" branch (only the "newitem fits"
  branch handles split_only_page). Add a defensive assertion that will
  fail if a split_only_page call to _bt_insertonpg() somehow ends up
  splitting the target/parent page.

  I (pgeoghegan) believe that we don't need split_only_page handling for
  the "newitem won't fit" branch because anybody calling _bt_insertonpg()
  like this would have to hold a lock on the same one and only child page.

* Comments and doc fixes for commit 40d964ec99.  (Amit Kapila, 2020-04-14)
  Reported-by: Justin Pryzby
  Author: Justin Pryzby, with a few changes by me
  Reviewed-by: Amit Kapila and Sawada Masahiko
  Discussion: https://postgr.es/m/20200322021801.GB2563@telsasoft.com

* Make _bt_insertonpg() more like _bt_split().  (Peter Geoghegan, 2020-04-13)
  It seems like a good idea for nbtree's retail insert code to be
  absolutely consistent with nbtree's page split code for anything that
  naturally requires equivalent handling. Anything that concerns inserting
  newitem (which is handled as part of the page split atomic action when a
  page split is required) should work in exactly the same way. With that
  in mind, make _bt_insertonpg() handle 'cbuf' in a way that matches
  _bt_split().

* Harmonize nbtree page split point code.  (Peter Geoghegan, 2020-04-13)
  An nbtree split point can be thought of as a point between two adjoining
  tuples from an imaginary version of the page being split that includes
  the incoming/new item (in addition to the items that really are on the
  page). These adjoining tuples are called the lastleft and firstright
  tuples.

  The variables that represent split points contained a field called
  firstright, which is an offset number of the first data item from the
  original page that goes on the new right page. The corresponding tuple
  from origpage was usually the same thing as the actual firstright tuple,
  but not always: the firstright tuple is sometimes the new/incoming item
  instead. This situation seems unnecessarily confusing.

  Make things clearer by renaming the origpage offset returned by
  _bt_findsplitloc() to "firstrightoff". We now have a firstright tuple
  and a firstrightoff offset number which are comparable to the
  newitem/lastleft tuples and the newitemoff/lastleftoff offset numbers
  respectively. Also make sure that we are consistent about how we
  describe nbtree page split point state.

  Push the responsibility for dealing with pg_upgrade'd !heapkeyspace
  indexes down to lower level code, relieving _bt_split() from dealing
  with it directly. This means that we always have a palloc'd left page
  high key on the leaf level, no matter what. This enables simplifying
  some of the code (and code comments) within _bt_split().

  Finally, restructure the page split code to make it clearer why suffix
  truncation (which only takes place during leaf page splits) is
  completely different to the first data item truncation that takes place
  during internal page splits. Tuples are marked as having fewer
  attributes stored in both cases, and the firstright tuple is truncated
  in both cases, so it's easy to imagine somebody missing the distinction.

* Use perl warnings pragma consistently  (Andrew Dunstan, 2020-04-13)
  We've had a mixture of the warnings pragma, the -w switch on the shebang
  line, and no warnings at all. This patch removes the -w switch and adds
  the warnings pragma to all Perl sources missing it. It raises the
  severity of the TestingAndDebugging::RequireUseWarnings perlcritic
  policy to level 5, so that we catch any future violations.

  Discussion: https://postgr.es/m/20200412074245.GB623763@rfd.leadboat.com

* Cosmetic fixups for WAL usage work.  (Amit Kapila, 2020-04-13)
  Reported-by: Justin Pryzby and Euler Taveira
  Author: Justin Pryzby and Julien Rouhaud
  Reviewed-by: Amit Kapila
  Discussion: https://postgr.es/m/CAB-hujrP8ZfUkvL5OYETipQwA=e3n7oqHFU=4ZLxWS_Cza3kQQ@mail.gmail.com

* Improve error messages after LoadLibrary()  (Peter Eisentraut, 2020-04-13)
  Move the file name to a format parameter to ease translatability. Add
  error code where missing. Make the wording consistent.

* Suppress -Wimplicit-fallthrough warning in new LIMIT WITH TIES code.  (Tom Lane, 2020-04-11)
  The placement of the fall-through comment in this code appears not to
  work to suppress the warning in recent gcc. Move it to the bottom of the
  case group, and add an assertion that we didn't get there through some
  other code path. Also improve wording of nearby comments.

  Julien Rouhaud, comment hacking by me
  Discussion: https://postgr.es/m/CAOBaU_aLdPGU5wCpaowNLF-Q8328iR7mj1yJAhMOVsdLwY+sdg@mail.gmail.com

* Optimize RelationFindReplTupleSeq() for CLOBBER_CACHE_ALWAYS.  (Noah Misch, 2020-04-11)
  Specifically, remember lookup_type_cache() results instead of retrieving
  them once per comparison. Under CLOBBER_CACHE_ALWAYS, this reduced
  src/test/subscription/t/001_rep_changes.pl elapsed time by an order of
  magnitude, which reduced check-world elapsed time by 9%.

  Discussion: https://postgr.es/m/20200406085420.GC162712@rfd.leadboat.com

* When WalSndCaughtUp, sleep only in WalSndWaitForWal().  (Noah Misch, 2020-04-11)
  Before sleeping, WalSndWaitForWal() sends a keepalive if
  MyWalSnd->write < sentPtr. That is important in logical replication.
  When the latest physical LSN yields no logical replication messages (a
  common case), that keepalive elicits a reply, and processing the reply
  updates pg_stat_replication.replay_lsn. WalSndLoop() lacks that; when
  WalSndLoop() slept, replay_lsn advancement could stall until
  wal_receiver_status_interval elapsed. This sometimes stalled
  src/test/subscription/t/001_rep_changes.pl for up to 10s.

  Discussion: https://postgr.es/m/20200406063649.GA3738151@rfd.leadboat.com

* Make EXPLAIN report maximum hashtable usage across multiple rescans.  (Tom Lane, 2020-04-11)
  Before discarding the old hash table in ExecReScanHashJoin, capture its
  statistics, ensuring that we report the maximum hashtable size across
  repeated rescans of the hash input relation. We can repurpose the
  existing code for reporting hashtable size in parallel workers to help
  with this, making the patch pretty small. This also ensures that if
  rescans happen within parallel workers, we get the correct maximums
  across all instances.

  Konstantin Knizhnik and Tom Lane, per diagnosis by Thomas Munro of a
  trouble report from Alvaro Herrera.
  Discussion: https://postgr.es/m/20200323165059.GA24950@alvherre.pgsql

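  For illustration, a hedged sketch of the kind of plan this affects
  (table and column names hypothetical): a hash join sitting under a
  nested loop is rescanned once per outer row, and EXPLAIN ANALYZE should
  now report the maximum hashtable usage across all those rescans.

      -- hypothetical tables; the hash join below the nested loop is rescanned
      EXPLAIN ANALYZE
      SELECT *
      FROM small s,
           LATERAL (SELECT count(*)
                    FROM big1 b1 JOIN big2 b2 USING (id)
                    WHERE b1.grp = s.grp) sub;
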
* Clear dangling pointer to avoid bogus EXPLAIN printout in a corner case.  (Tom Lane, 2020-04-11)
  ExecReScanHashJoin will destroy the join's hash table if it expects that
  the inner relation will produce different rows on rescan. Up to now it's
  not bothered to clear the additional pointer to that hash table that
  exists in the child HashState node. However, it's possible for the query
  to terminate without building a fresh hash table (this happens if the
  outer relation is found to be empty during the final rescan). So we can
  end with a dangling pointer to a deleted hash table. That was harmless
  originally, but since 9.0 EXPLAIN ANALYZE has used that pointer to print
  hash table statistics.

  In debug builds this reproducibly results in garbage statistics. In
  non-debug builds there's frequently no ill effects, but in principle one
  could get wrong EXPLAIN ANALYZE output, or perhaps even a crash if
  free() has released the hashtable memory back to the OS.

  To fix, just make sure we clear the additional pointer when destroying
  the hash table. In problematic cases, EXPLAIN ANALYZE will then print no
  hashtable statistics (reverting to its pre-9.0 behavior). This isn't
  ideal, but since the problem manifests only in unusual corner cases,
  it's hard to justify taking any risks to do better in the back branches.
  A follow-on patch will improve matters in HEAD.

  Konstantin Knizhnik and Tom Lane, per diagnosis by Thomas Munro of a
  trouble report from Alvaro Herrera.
  Discussion: https://postgr.es/m/20200323165059.GA24950@alvherre.pgsql

* Fix RELCACHE_FORCE_RELEASE issue  (Peter Eisentraut, 2020-04-11)
  Introduced by 83fd4532a72179c370e318075a10e0e2aa832024. To fix, the
  tuple descriptors need to be copied into the current memory context.

  Discussion: https://www.postgresql.org/message-id/04d78603-edae-9243-9dde-fe3037176a7d@2ndquadrant.com

* Fix relcache reference leak  (Peter Eisentraut, 2020-04-11)
  Introduced by 83fd4532a72179c370e318075a10e0e2aa832024.

* Suppress unused-variable warning.  (Tom Lane, 2020-04-10)
  Ashutosh Bapat
  Discussion: https://postgr.es/m/CAG-ACPWPB8Lc_aFj25eiPFqi31YB5vmaZnb39mbHSf5Yej=miA@mail.gmail.com

* Fix collection of typos and grammar mistakes in the tree  (Michael Paquier, 2020-04-10)
  This fixes some comments and documentation new as of Postgres 13.

  Author: Justin Pryzby
  Discussion: https://postgr.es/m/20200408165653.GF2228@telsasoft.com

* Further cleanup of ts_headline code.  (Tom Lane, 2020-04-09)
  Suppress a probably-meaningless uninitialized-variable warning (induced
  by my previous patch, I'm sorry to say).

  Improve mark_hl_fragments()'s test for overlapping cover strings: it
  failed to consider the possibility that the current string is strictly
  within another one. That's unlikely given the preceding splitting into
  MaxWords fragments, but I don't think it's impossible.

  Discussion: https://postgr.es/m/16345-2e0cf5cddbdcd3b4@postgresql.org

* Fix default text search parser's ts_headline code for phrase queries.  (Tom Lane, 2020-04-09)
  This code could produce very poor results when asked to highlight a
  string based on a query using phrase-match operators. The root cause is
  that hlCover(), which is supposed to find a minimal substring that
  matches the query, was written assuming that word position is not
  significant. I'm only 95% convinced that its algorithm was correct even
  for plain AND/OR queries; but it definitely fails completely for phrase
  matches, causing it to possibly not identify a cover string at all.

  Hence, rewrite hlCover() with a less-tense algorithm that just tries all
  the possible substrings, earlier and shorter ones first. (This is not as
  bad as it sounds performance-wise, because all of the string matching
  has been done already: the repeated tsquery match checks boil down to
  pointer comparisons.)

  Unfortunately, since that approach produces more candidate cover strings
  than before, it also exposes that there were bugs in the heuristics in
  mark_hl_words() for selecting a best cover string. Fixes there include:

    * Do not apply the ShortWord filter to words that appear in the query.
    * Remove a misguided optimization for quickly rejecting a cover.
    * Fix order-of-operation bug that could cause computation of a wrong
      figure of merit (poslen) when shortening a cover.
    * Change the preference rule so that candidate headlines that do not
      include their whole cover string (after MaxWords trimming) are
      lowest priority, since they may not actually satisfy the user's
      query.

  This results in some changes in existing regression test cases, but
  they all seem reasonable. Note in particular that the tests involving
  strings like "1 2 3" were previously being affected by the ShortWord
  filter, masking the normal matching behavior.

  Per bug #16345 from Augustinas Jokubauskas; the new test cases are based
  on that example. Back-patch to 9.6 where phrase search was added to
  tsquery.

  Discussion: https://postgr.es/m/16345-2e0cf5cddbdcd3b4@postgresql.org

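  A minimal SQL illustration of the behavior being fixed (sample text
  invented):

      SELECT ts_headline('english',
                         'The fat cat sat on the mat and ate a fat rat',
                         phraseto_tsquery('english', 'fat rat'));
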
* Cosmetic improvements for default text search parser's ts_headline code.  (Tom Lane, 2020-04-09)
  This code was woefully unreadable and under-commented. Try to improve
  matters by adding comments, using some macros to make complicated
  if-tests more readable, using boolean type where appropriate, etc. There
  are a couple of tiny coding improvements too, but this commit includes
  (I hope) no behavioral change.

  Nonetheless, back-patch as far as 9.6, because a followup bug-fixing
  commit depends on this.

  Discussion: https://postgr.es/m/16345-2e0cf5cddbdcd3b4@postgresql.org

* Fix CREATE TABLE LIKE INCLUDING GENERATED column order issue  (Peter Eisentraut, 2020-04-09)
  CREATE TABLE LIKE INCLUDING GENERATED would fail if a generated column
  referred to a column with a higher attribute number. This is because the
  column mapping mechanism created the mapping incrementally as columns
  are added. This was sufficient for previous uses of that mechanism
  (omitting dropped columns), and it also happened to work if generated
  columns only referred to columns with lower attribute numbers, but here
  it failed.

  This fix is to build the attribute mapping in a separate loop before
  processing the columns in detail.

  Bug: #16342
  Reported-by: Ethan Waldo <ewaldo@healthetechs.com>
  Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>

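  A minimal sketch of the previously-failing case (table names
  hypothetical): the generated column a refers to b, which has a higher
  attribute number.

      CREATE TABLE t1 (
          a int GENERATED ALWAYS AS (b * 2) STORED,
          b int
      );
      CREATE TABLE t2 (LIKE t1 INCLUDING GENERATED);  -- failed before this fix
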
* Exclude backup_manifest file that exists in the database from BASE_BACKUP.  (Fujii Masao, 2020-04-09)
  If there is already a backup_manifest file in the database cluster, it
  belongs to a past backup that was used to start this server. It is not
  correct for the backup being taken now. So this commit changes
  pg_basebackup so that it always skips such a backup_manifest file. The
  backup_manifest file for the current backup will be injected separately
  if users want it.

  Author: Fujii Masao
  Reviewed-by: Robert Haas
  Discussion: https://postgr.es/m/78f76a3d-1a28-a97d-0394-5c96985dd1c0@oss.nttdata.com

* Allow parallel create index to accumulate buffer usage stats.  (Amit Kapila, 2020-04-09)
  Currently, we don't account for buffer usage incurred by parallel
  workers for parallel create index. This commit allows each worker to
  record its buffer usage stats and the leader backend to accumulate those
  stats at the end of the operation. This will allow pg_stat_statements to
  display correct buffer usage stats for a (parallel) create index
  command.

  Reported-by: Julien Rouhaud
  Author: Sawada Masahiko
  Reviewed-by: Dilip Kumar, Julien Rouhaud and Amit Kapila
  Backpatch-through: 11, where this was introduced
  Discussion: https://postgr.es/m/20200328151721.GB12854@nol

* Rationalize GetWalRcv{Write,Flush}RecPtr().  (Thomas Munro, 2020-04-08)
  GetWalRcvWriteRecPtr() previously reported the latest *flushed*
  location. Adopt the conventional terminology used elsewhere in the tree
  by renaming it to GetWalRcvFlushRecPtr(), and likewise for some related
  variables that used the term "received".

  Add a new definition of GetWalRcvWriteRecPtr(), which returns the latest
  *written* value. This will allow later patches to use the value for
  non-data-integrity purposes, without having to wait for the flush
  pointer to advance.

  Reviewed-by: Alvaro Herrera <alvherre@2ndquadrant.com>
  Reviewed-by: Andres Freund <andres@anarazel.de>
  Discussion: https://postgr.es/m/CA%2BhUKGJ4VJN8ttxScUFM8dOKX0BrBiboo5uz1cq%3DAovOddfHpA%40mail.gmail.com

* Allow publishing partition changes via ancestors  (Peter Eisentraut, 2020-04-08)
  To control whether partition changes are replicated using their own
  identity and schema or an ancestor's, add a new per-publication
  parameter named 'publish_via_partition_root'. This allows replicating a
  partitioned table into a different partition structure on the
  subscriber.

  Author: Amit Langote <amitlangote09@gmail.com>
  Reviewed-by: Rafia Sabih <rafia.pghackers@gmail.com>
  Reviewed-by: Peter Eisentraut <peter.eisentraut@2ndquadrant.com>
  Reviewed-by: Petr Jelinek <petr@2ndquadrant.com>
  Discussion: https://www.postgresql.org/message-id/flat/CA+HiwqH=Y85vRK3mOdjEkqFK+E=ST=eQiHdpj43L=_eJMOOznQ@mail.gmail.com

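  For illustration, a hedged sketch of the new parameter in use
  (publication and table names hypothetical):

      -- replicate changes to the partitions using the root table's
      -- own identity and schema
      CREATE PUBLICATION pub_parted FOR TABLE measurement
          WITH (publish_via_partition_root = true);
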
* Revert 0f5ca02f53  (Alexander Korotkov, 2020-04-08)
  0f5ca02f53 introduced 3 new keywords, which seems too much for a
  relatively small feature. Given that we are now past feature freeze,
  it's too late to discuss new syntax. So, revert.

  Discussion: https://postgr.es/m/28209.1586294824%40sss.pgh.pa.us

* Modify additional power 2 calculations to use new helper functions  (David Rowley, 2020-04-08)
  Second pass of modifying various places that obtain the next power of 2
  of a number, making them use the new functions added in f0705bb62.

  In passing, also modify num_combinations(): it can be implemented using
  simple bit shifting rather than looping.

  Reviewed-by: John Naylor
  Discussion: https://postgr.es/m/20200114173553.GE32763%40fetter.org

* Fix crash when using COLLATE in partition bound expressions  (Michael Paquier, 2020-04-08)
  Attempting to use a COLLATE clause with a type that is not collatable in
  a partition bound expression could crash the server. This commit fixes
  the code by adding more checks, similar to what is done when computing
  index or partition attributes, making sure that there is a collation if
  and only if the type is collatable.

  Backpatch down to 12, as 7c079d7 introduced this problem.

  Reported-by: Alexander Lakhin
  Author: Dmitry Dolgov
  Discussion: https://postgr.es/m/16325-809194cf742313ab@postgresql.org
  Backpatch-through: 12

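  A sketch of the kind of statement involved (names hypothetical); since
  int is not collatable, the COLLATE clause in the bound is now rejected
  with an error instead of crashing the server.

      CREATE TABLE parted (a int) PARTITION BY RANGE (a);
      CREATE TABLE part1 PARTITION OF parted
          FOR VALUES FROM (1 COLLATE "POSIX") TO (10);  -- error, not a crash
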
* Modify various power 2 calculations to use new helper functions  (David Rowley, 2020-04-08)
  First pass of modifying various places that obtain the next power of 2
  of a number, making them use the new functions added in pg_bitutils.h
  instead.

  This also removes the _hash_log2() function. There are no longer any
  callers in core. Other users can swap their _hash_log2(n) call to make
  use of pg_ceil_log2_32(n).

  Author: David Fetter, with some minor adjustments by me
  Reviewed-by: John Naylor, Jesse Zhang
  Discussion: https://postgr.es/m/20200114173553.GE32763%40fetter.org

* Create memory context for HashAgg with a reasonable maxBlockSize.  (Jeff Davis, 2020-04-07)
  If the memory context's maxBlockSize is too big, a single block
  allocation can suddenly exceed work_mem. For Hash Aggregation, this can
  mean spilling to disk too early or reporting a confusing memory usage
  number for EXPLAIN ANALYZE.

  Introduce CreateWorkExprContext(), which is like CreateExprContext(),
  except that it creates the AllocSet with a maxBlockSize that is
  reasonable in proportion to work_mem. Right now,
  CreateWorkExprContext() is only used by Hash Aggregation, but it may be
  generally useful in the future.

  Discussion: https://postgr.es/m/412a3fbf306f84d8d78c4009e11791867e62b87c.camel@j-davis.com

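  The user-visible effect can be observed with something like the
  following hedged example (assumes PG13's EXPLAIN ANALYZE reporting of
  memory and disk usage for Hash Aggregation):

      SET work_mem = '256kB';
      EXPLAIN (ANALYZE)
      SELECT g % 10000 AS k, count(*)
      FROM generate_series(1, 1000000) g
      GROUP BY k;  -- HashAggregate spills/reports memory more sensibly
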
* Put back mistakenly removed #include.  (Tom Lane, 2020-04-08)
  In commit 4dbcb3f84 I removed some code from parse_coerce.c, and also
  removed some apparently-no-longer-needed #includes. But removing
  datum.h broke some not-compiled-by-default code.

  Discussion: https://postgr.es/m/20200407205436.pyjhddw5bn5upvsu@development

* Support PrefetchBuffer() in recovery.  (Thomas Munro, 2020-04-08)
  Provide PrefetchSharedBuffer(), a variant that takes SMgrRelation, for
  use in recovery. Rename LocalPrefetchBuffer() to PrefetchLocalBuffer()
  for consistency.

  Add a return value to all of these. In recovery, tolerate and report
  missing files, so we can handle relations unlinked before crash
  recovery began. Also report cache hits and misses, so that callers can
  do faster buffer lookups and better I/O accounting.

  Reviewed-by: Alvaro Herrera <alvherre@2ndquadrant.com>
  Reviewed-by: Andres Freund <andres@anarazel.de>
  Discussion: https://postgr.es/m/CA%2BhUKGJ4VJN8ttxScUFM8dOKX0BrBiboo5uz1cq%3DAovOddfHpA%40mail.gmail.com

* Allow partitionwise join to handle nested FULL JOIN USING cases.  (Tom Lane, 2020-04-07)
  This case didn't work because columns merged by FULL JOIN USING are
  represented in the parse tree by COALESCE expressions, and the logic for
  recognizing a partitionable join failed to match upper-level join
  clauses to such expressions. To fix, synthesize suitable COALESCE
  expressions and add them to the nullable_partexprs lists.

  This is pretty ugly and brute-force, but it gets the job done. (I have
  ambitions of rethinking the way outer-join output Vars are represented,
  so maybe that will provide a cleaner solution someday. For now, do
  this.)

  Amit Langote, reviewed by Justin Pryzby, Richard Guo, and myself
  Discussion: https://postgr.es/m/CA+HiwqG2WVUGmLJqtR0tPFhniO=H=9qQ+Z3L_ZC+Y3-EVQHFGg@mail.gmail.com

* Allow partitionwise joins in more cases.  (Etsuro Fujita, 2020-04-08)
  Previously, the partitionwise join technique was applied only when the
  input partitioned tables had exactly the same partition bounds. This
  commit extends the technique to some cases where the tables have
  different partition bounds, using an advanced partition-matching
  algorithm introduced by this commit.

  The algorithm checks whether every partition of one input partitioned
  table matches at most one partition of the other input partitioned
  table, and vice versa. In such a case the join between the tables can be
  broken down into joins between the matching partitions, so the algorithm
  produces the pairs of matching partitions, plus the partition bounds for
  the join relation, to allow partitionwise join for computing the join.

  Currently, the algorithm works for list-partitioned and
  range-partitioned tables, but not hash-partitioned tables. See comments
  in partition_bounds_merge().

  Ashutosh Bapat and Etsuro Fujita, most of regression tests by Rajkumar
  Raghuwanshi, some of the tests by Mark Dilger and Amul Sul, reviewed by
  Dmitry Dolgov and Amul Sul, with additional review at various points by
  Ashutosh Bapat, Mark Dilger, Robert Haas, Antonin Houska, Amit Langote,
  Justin Pryzby, and Tomas Vondra
  Discussion: https://postgr.es/m/CAFjFpRdjQvaUEV5DJX3TW6pU5eq54NCkadtxHX2JiJG_GvbrCA@mail.gmail.com

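  A hedged sketch of a newly-supported case (hypothetical tables whose
  list bounds differ, but where each partition matches at most one
  partner):

      SET enable_partitionwise_join = on;
      CREATE TABLE t1 (a int) PARTITION BY LIST (a);
      CREATE TABLE t1_p1 PARTITION OF t1 FOR VALUES IN (1, 2);
      CREATE TABLE t1_p2 PARTITION OF t1 FOR VALUES IN (3, 4);
      CREATE TABLE t2 (a int) PARTITION BY LIST (a);
      CREATE TABLE t2_p1 PARTITION OF t2 FOR VALUES IN (1, 2);
      CREATE TABLE t2_p2 PARTITION OF t2 FOR VALUES IN (3);  -- bounds differ
      EXPLAIN SELECT * FROM t1 JOIN t2 USING (a);
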
* Fix circle_in to accept "(x,y),r" as it's advertised to do.  (Tom Lane, 2020-04-07)
  Our documentation describes four allowed input syntaxes for circles, but
  the regression tests tried only three ... with predictable consequences.
  Remarkably, this has been wrong since the circle datatype was added in
  1997, but nobody noticed till now.

  David Zhang, with some help from me
  Discussion: https://postgr.es/m/332c47fa-d951-7574-b5cc-a8f7f7201202@highgo.ca

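  For reference, the four documented input syntaxes (the third is the one
  that was missing test coverage):

      SELECT '<(1,2),3>'::circle;   -- <(x,y),r>
      SELECT '((1,2),3)'::circle;   -- ((x,y),r)
      SELECT '(1,2),3'::circle;     -- (x,y),r   (the case fixed here)
      SELECT '1,2,3'::circle;       -- x,y,r
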
* snapshot scalability: Move delayChkpt from PGXACT to PGPROC.  (Andres Freund, 2020-04-07)
  The goal of separating hotly accessed per-backend data from PGPROC into
  PGXACT is to make accesses fast (GetSnapshotData() in particular). But
  delayChkpt is not actually accessed frequently; only when starting a
  checkpoint. As it is frequently modified (multiple times in the course
  of a single transaction), storing it in the same cacheline as hotly
  accessed data unnecessarily dirties a contended cacheline.

  Therefore move delayChkpt to PGPROC.

  This is part of a larger series of patches intending to improve
  GetSnapshotData() scalability. It is committed and pushed separately, as
  it is independently beneficial (small but measurable win, limited by the
  other frequent modifications of PGXACT).

  Author: Andres Freund
  Reviewed-By: Robert Haas, Thomas Munro, David Rowley
  Discussion: https://postgr.es/m/20200301083601.ews6hz5dduc3w2se@alap3.anarazel.de

* Track SLRU page hits in SimpleLruReadPage_ReadOnly  (Tomas Vondra, 2020-04-08)
  SLRU page hits were tracked only in SimpleLruReadPage, but that's not
  enough, because we may hit the page in SimpleLruReadPage_ReadOnly, in
  which case we don't call SimpleLruReadPage at all.

  Reported-by: Kuntal Ghosh
  Discussion: https://postgr.es/m/20200119143707.gyinppnigokesjok@development

* Fix XLogReader FD leak that makes backends unusable after 2PC usage.  (Andres Freund, 2020-04-07)
  Before the fix, every 2PC commit/abort leaked a file descriptor. As the
  files are opened using BasicOpenFile(), that quickly leads to the
  backend running out of file descriptors. Once enough 2PC aborts/commits
  have caused enough FDs to leak, any IO in the backend will fail with
  "Too many open files", as BasicOpenFilePerm() will have triggered all
  open files known to fd.c to be closed.

  The leak causing the problem at hand is a consequence of 0dc8ead46, but
  is only exacerbated by it. Previously most XLogPageReadCB callbacks used
  static variables to cache one open file, but after the commit the cache
  is private to each XLogReader instance. There never was infrastructure
  to close FDs at the time of XLogReaderFree, but the way XLogReader was
  used limited the leak to one FD.

  This commit just closes the FD during XLogReaderFree() if one is stored
  in XLogReaderState.seg.ws_file. This may not be the way to solve this in
  the medium/long term, but at least it unbreaks 2PC.

  Discussion: https://postgr.es/m/20200406025651.fpzdb5yyb7qyhqko@alap3.anarazel.de

* Remove nbtree BTreeTupleSetAltHeapTID() function.  (Peter Geoghegan, 2020-04-07)
  Since heap TID is supposed to be just another key attribute to the
  implementation, it doesn't make much sense to have separate
  BTreeTupleSetNAtts() and BTreeTupleSetAltHeapTID() functions. Merge the
  two functions together. This slightly simplifies _bt_truncate().

* Allow users to limit storage reserved by replication slots  (Alvaro Herrera, 2020-04-07)
  Replication slots are useful to retain data that may be needed by a
  replication system. But experience has shown that allowing them to
  retain excessive data can lead to the primary failing because of running
  out of space. This new feature allows the user to configure a maximum
  amount of space to be reserved using the new option
  max_slot_wal_keep_size. Slots that overrun that space are invalidated at
  checkpoint time, enabling the storage to be released.

  Author: Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>
  Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
  Reviewed-by: Jehan-Guillaume de Rorthais <jgdr@dalibo.com>
  Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
  Discussion: https://postgr.es/m/20170228.122736.123383594.horiguchi.kyotaro@lab.ntt.co.jp

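  A hedged configuration example (value invented):

      -- cap the WAL kept around for replication slots at 10GB; slots that
      -- would retain more are invalidated at the next checkpoint
      ALTER SYSTEM SET max_slot_wal_keep_size = '10GB';
      SELECT pg_reload_conf();
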
* Implement waiting for given lsn at transaction start  (Alexander Korotkov, 2020-04-07)
  This commit adds the following optional clause to the BEGIN and START
  TRANSACTION commands:

      WAIT FOR LSN lsn [ TIMEOUT timeout ]

  The new clause postpones transaction start until the given LSN has been
  applied on the standby. This lets the user be sure that changes
  previously made on the primary will be visible on the standby.

  A new shared memory struct is used to track the awaited LSN per backend.
  The recovery process wakes up a backend once the required LSN has been
  applied.

  Author: Ivan Kartyshov, Anna Akenteva
  Reviewed-by: Craig Ringer, Thomas Munro, Robert Haas, Kyotaro Horiguchi
  Reviewed-by: Masahiko Sawada, Ants Aasma, Dmitry Ivanov, Simon Riggs
  Reviewed-by: Amit Kapila, Alexander Korotkov
  Discussion: https://postgr.es/m/0240c26c-9f84-30ea-fca9-93ab2df5f305%40postgrespro.ru

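  A hedged usage sketch of the syntax above (the LSN value and the exact
  literal forms are illustrative assumptions):

      -- on a standby: delay transaction start until this LSN is applied,
      -- giving up after the timeout
      BEGIN WAIT FOR LSN '0/3F0000A8' TIMEOUT 10000;
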
* Support FETCH FIRST WITH TIES  (Alvaro Herrera, 2020-04-07)
  WITH TIES is an option to the FETCH FIRST N ROWS clause (the SQL
  standard's spelling of LIMIT), where you additionally get rows that
  compare equal to the last of those N rows by the columns in the
  mandatory ORDER BY clause.

  There was a proposal by Andrew Gierth to implement this functionality in
  a more powerful way that would yield more features, but the other patch
  had not been finished at this time, so we decided to use this one for
  now in the spirit of incremental development.

  Author: Surafel Temesgen <surafel3000@gmail.com>
  Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
  Reviewed-by: Tomas Vondra <tomas.vondra@2ndquadrant.com>
  Discussion: https://postgr.es/m/CALAY4q9ky7rD_A4vf=FVQvCGngm3LOes-ky0J6euMrg=_Se+ag@mail.gmail.com
  Discussion: https://postgr.es/m/87o8wvz253.fsf@news-spur.riddles.org.uk

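  A minimal example (hypothetical table):

      -- returns the top 3 scores, plus any further rows tied with the 3rd
      SELECT * FROM exam_results
      ORDER BY score DESC
      FETCH FIRST 3 ROWS WITH TIES;
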
* Adjust bytea get_bit/set_bit to use int8 not int4 for bit numbering.  (Tom Lane, 2020-04-07)
  Since the existing bit number argument can't exceed INT32_MAX, it's not
  possible for these functions to manipulate bits beyond the first 256MB
  of a bytea value. Lift that restriction by redeclaring the bit number
  arguments as int8 (which requires a catversion bump, hence is not
  back-patchable).

  The similarly-named functions for bit/varbit don't really have a
  problem because we restrict those types to at most VARBITMAXLEN bits;
  hence leave them alone.

  While here, extend the encode/decode functions in utils/adt/encode.c to
  allow dealing with values wider than 1GB. This is not a live bug or
  restriction in current usage, because no input could be more than 1GB,
  and since none of the encoders can expand a string more than 4X, the
  result size couldn't overflow uint32. But it might be desirable to
  support more in future, so make the input length values size_t and the
  potential-output-length values uint64.

  Also add some test cases to improve the miserable code coverage of
  these functions.

  Movead Li, editorialized some by me; also reviewed by Ashutosh Bapat
  Discussion: https://postgr.es/m/20200312115135445367128@highgo.ca

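  A hedged illustration of the redeclared functions (small values shown;
  the point is that the bit-number argument is now int8, so offsets past
  2^31-1 into a very large bytea are accepted):

      SELECT get_bit('\x1234'::bytea, 0);
      SELECT set_bit('\x1234'::bytea, 15, 0);
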
* Remove debugging elog from pgstat_recv_resetslrucounter  (Tomas Vondra, 2020-04-07)
  Reported-by: Thomas Munro

* Minor improvements in Incremental Sort explain  (Tomas Vondra, 2020-04-07)
  Some places still used "Maximum" instead of "Peak" when displaying info
  about sort space, so fix that. Also, add a comment clarifying why it's
  correct to check the number of full/prefix sort groups.

  Author: James Coleman
  Discussion: https://postgr.es/m/CAPpHfds1waRZ=NOmueYq0sx1ZSCnt+5QJvizT8ndT2=etZEeAQ@mail.gmail.com

* Prevent archive recovery from scanning non-existent WAL files.  (Fujii Masao, 2020-04-08)
  Previously, when there were multiple timelines listed in the history
  file of the recovery target timeline, archive recovery searched all of
  them, starting from the newest timeline to the oldest one, to find the
  segment to read. That is, archive recovery repeatedly failed to scan the
  segment until it reached the timeline that the segment belonged to.
  These scans for non-existent segments could harm recovery performance,
  especially when the archival area was located on remote storage and each
  scan could take a long time.

  To address the issue, this commit changes archive recovery so that it
  skips scanning any timeline that the segment to read doesn't belong to.

  Author: Kyotaro Horiguchi, tweaked a bit by Fujii Masao
  Reviewed-by: David Steele, Pavel Suderevsky, Grigory Smolkin
  Discussion: https://postgr.es/m/16159-f5a34a3a04dc67e0@postgresql.org
  Discussion: https://postgr.es/m/20200129.120222.1476610231001551715.horikyota.ntt@gmail.com

* Consider Incremental Sort paths at additional places  (Tomas Vondra, 2020-04-07)
  Commit d2d8a229bc introduced Incremental Sort, but it was considered
  only in create_ordered_paths() as an alternative to regular Sort. There
  are many other places that require sorted input and might benefit from
  considering Incremental Sort too. This patch modifies a number of those
  places, but not all.

  The concern is that just adding Incremental Sort to any place that
  already adds Sort may increase the number of paths considered,
  negatively affecting planning time, without any benefit. So we've taken
  a more conservative approach, based on analysis of which places affect a
  set of queries that seem practical. This means some less common queries
  may not benefit from Incremental Sort yet.

  Author: Tomas Vondra
  Reviewed-by: James Coleman
  Discussion: https://postgr.es/m/CAPpHfds1waRZ=NOmueYq0sx1ZSCnt+5QJvizT8ndT2=etZEeAQ@mail.gmail.com

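  A hedged example of a query shape that can now use Incremental Sort in
  more places (hypothetical table; assumes enough data for the planner to
  pick an index scan on a):

      CREATE TABLE t (a int, b int);
      CREATE INDEX ON t (a);
      -- the index provides order on a; only b needs sorting within groups
      EXPLAIN SELECT * FROM t ORDER BY a, b;
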