path: root/src
Commit message (Author, Date)
* Fix incorrect tests for SRFs in relation_can_be_sorted_early(). (Tom Lane, 2022-08-03)
  Commit fac1b470a thought we could check for set-returning functions by testing only the top-level node in an expression tree. This is wrong in itself, and to make matters worse it encouraged others to make the same mistake, by exporting tlist.c's special-purpose IS_SRF_CALL() as a widely-visible macro. I can't find any evidence that anyone's taken the bait, but it was only a matter of time.
  Use expression_returns_set() instead, and stuff the IS_SRF_CALL() genie back in its bottle, this time with a warning label. I also added a couple of cross-reference comments.
  After a fair amount of fooling around, I've despaired of making a robust test case that exposes the bug reliably, so no test case here. (Note that the test case added by fac1b470a is itself broken, in that it doesn't notice if you remove the code change. The repro given by the bug submitter currently doesn't fail either in v15 or HEAD, though I suspect that may indicate an unrelated bug.)
  Per bug #17564 from Martijn van Oosterhout. Back-patch to v13, as the faulty patch was.
  Discussion: https://postgr.es/m/17564-c7472c2f90ef2da3@postgresql.org
* Reduce test runtime of src/test/modules/snapshot_too_old. (Tom Lane, 2022-08-03)
  The sto_using_cursor and sto_using_select tests were coded to exercise every permutation of their test steps, but AFAICS there is no value in exercising more than one. This matters because each permutation costs about six seconds, thanks to the "pg_sleep(6)". Perhaps we could reduce that, but the useless permutations seem worth getting rid of in any case. (Note that sto_using_hash_index got it right already.)
  While here, clean up some other sloppiness such as an unused table.
  This doesn't make too much difference in interactive testing, since the wasted time is typically masked by parallelization with other tests. However, the buildfarm runs this as a serial step, which means we can expect to shave ~40 seconds from every buildfarm run. That makes it worth back-patching.
  Discussion: https://postgr.es/m/2515192.1659454702@sss.pgh.pa.us
* Add wait_for_subscription_sync for TAP tests. (Amit Kapila, 2022-08-03)
  The TAP tests for logical replication in src/test/subscription are using the following code in many places to make sure that the subscription is synchronized with the publisher:
    $node_publisher->wait_for_catchup('tap_sub');
    $node_subscriber->poll_query_until('postgres',
        qq[SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r', 's')]);
  The new function wait_for_subscription_sync() can be used to replace the above code. This eliminates duplicated code and makes it easier to write future tests.
  Author: Masahiko Sawada
  Reviewed by: Amit Kapila, Shi yu
  Discussion: https://postgr.es/m/CAD21AoC-fvAkaKHa4t1urupwL8xbAcWRePeETvshvy80f6WV1A@mail.gmail.com
* Remove unused fields from ExprEvalStep (David Rowley, 2022-08-03)
  These were added recently by 1349d2790.
  Reported-by: Zhihong Yu
  Discussion: https://postgr.es/m/CALNJ-vTi+YDuAWKp4Z_Dv=mrz=aq81qTg0D7wzc8y7rS_+i_cw@mail.gmail.com
* Change type "char"'s I/O format for non-ASCII characters. (Tom Lane, 2022-08-02)
  Previously, a byte with the high bit set was just transmitted as-is by charin() and charout(). This is problematic if the database encoding is multibyte, because the result of charout() won't be validly encoded, which breaks various stuff that expects all text strings to be validly encoded. We've previously decided to enforce encoding validity rather than try to individually harden each place that might have a problem with such strings, so it's time to do something about "char".
  To fix, represent high-bit-set characters as \ooo (backslash and three octal digits), following the ancient "escape" format for bytea. charin() will continue to accept the old way as well, though that is only reachable in single-byte encodings.
  Add some test cases just so there is coverage for this code. We'll otherwise leave this question undocumented as it was before, because we don't really want to encourage end-user use of "char".
  For the moment, back-patch into v15 so that this change appears in 15beta3. If there's not great pushback we should consider absorbing this change into the older branches.
  Discussion: https://postgr.es/m/2318797.1638558730@sss.pgh.pa.us
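  A minimal SQL sketch of the new behavior (hypothetical, not part of the commit; assumes a multibyte database encoding such as UTF8):
    -- A "char" value holding a byte with the high bit set is now printed as a
    -- backslash-octal escape instead of a raw, possibly invalidly encoded byte.
    SELECT 'a'::"char";      -- plain ASCII output is unchanged: a
    SELECT '\200'::"char";   -- charin() accepts the escape form; charout() now emits \200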
* Improve performance of ORDER BY / DISTINCT aggregates (David Rowley, 2022-08-02)
  ORDER BY / DISTINCT aggregates have, since they were implemented in Postgres, been executed by always performing a sort in nodeAgg.c to sort the tuples in the current group into the correct order before calling the transition function on the sorted tuples. This was not great as often there might be an index that could have provided pre-sorted input and allowed the transition functions to be called as the rows come in, rather than having to store them in a tuplestore in order to sort them once all the tuples for the group have arrived.
  Here we change the planner so it requests a path with a sort order which supports the largest number of ORDER BY / DISTINCT aggregate functions and add new code to the executor to allow it to support the processing of ORDER BY / DISTINCT aggregates where the tuples are already sorted in the correct order.
  Since there can be many ORDER BY / DISTINCT aggregates in any given query level, it's very possible that we can't find an order that suits all of these aggregates. The sort order that the planner chooses is simply the one that suits the most aggregate functions. We take the most strictly sorted variation of each order and see how many aggregate functions can use that, then we try again with the order of the remaining aggregates to see if another order would suit more aggregate functions. For example:
    SELECT agg(a ORDER BY a), agg2(a ORDER BY a,b) ...
  would request the sort order to be {a, b} because {a} is a subset of the sort order of {a,b}, but:
    SELECT agg(a ORDER BY a), agg2(a ORDER BY c) ...
  would just pick a plan ordered by {a} (we give precedence to aggregates which are earlier in the targetlist).
    SELECT agg(a ORDER BY a), agg2(a ORDER BY b), agg3(a ORDER BY b) ...
  would choose to order by {b} since two aggregates suit that vs just one that requires input ordered by {a}.
  Author: David Rowley
  Reviewed-by: Ronan Dunklau, James Coleman, Ranier Vilela, Richard Guo, Tom Lane
  Discussion: https://postgr.es/m/CAApHDvpHzfo92%3DR4W0%2BxVua3BUYCKMckWAmo-2t_KiXN-wYH%3Dw%40mail.gmail.com
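  A hedged sketch of the kind of case this targets; the table and index are made up for illustration, and whether the pre-sorted path is actually chosen depends on statistics and costs:
    -- With a suitable index, the planner can now feed rows to the ordered
    -- aggregate in index order instead of sorting them inside nodeAgg.c.
    CREATE TABLE t (a int, b int);
    CREATE INDEX t_a_idx ON t (a);
    EXPLAIN (COSTS OFF) SELECT array_agg(a ORDER BY a) FROM t;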
* Move common catalog cache access routines to lsyscache.c (Amit Kapila, 2022-08-02)
  In passing, move pg_relation_is_publishable next to similar functions.
  Suggested-by: Alvaro Herrera
  Author: Amit Kapila
  Reviewed-by: Hou Zhijie
  Discussion: https://postgr.es/m/CAHut+PupQ5UW9A9ut0Yjt21J9tHhx958z5L0k8-9hTYf_NYqxA@mail.gmail.com
* Fix comment in pg_db_role_setting.h (John Naylor, 2022-08-02)
  Noted by Japin Li
  Discussion: https://www.postgresql.org/message-id/MEYP282MB16691ACEDBC94161CF4BA1CCB69A9%40MEYP282MB1669.AUSP282.PROD.OUTLOOK.COM
* Remove duplicated wait for subscription sync from 007_ddl.pl. (Amit Kapila, 2022-08-02)
  An oversight in 8f2e2bbf14.
  Author: Masahiko Sawada
  Reviewed by: Amit Kapila
  Backpatch-through: 15, where it was introduced
  Discussion: https://postgr.es/m/CAD21AoC-fvAkaKHa4t1urupwL8xbAcWRePeETvshvy80f6WV1A@mail.gmail.com
* Relax overly strict rules in select_outer_pathkeys_for_merge() (David Rowley, 2022-08-02)
  The select_outer_pathkeys_for_merge function made an attempt to build the merge join pathkeys in the same order as query_pathkeys. This was done as it may have led to no sort being required for an ORDER BY or GROUP BY clause in the upper planner. However, this restriction seems overly strict as it required that we match the query_pathkeys entirely or we don't bother putting the merge join pathkeys in that order.
  Here we relax this rule so that we use a prefix of the query_pathkeys providing that prefix matches all of the join quals. This may provide the upper planner with partially sorted input which will allow the use of incremental sorts instead of full sorts.
  Author: David Rowley
  Reviewed-by: Richard Guo
  Discussion: https://postgr.es/m/CAApHDvrtZu0PHVfDPFM4Yx3jNR2Wuwosv+T2zqa7LrhhBr2rRg@mail.gmail.com
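  A hedged sketch of a query shape that can benefit; t1, t2 and their columns are hypothetical, and whether a merge join (and hence an incremental sort) is actually chosen depends on costing:
    -- The join quals cover (a, b) and the query orders by (a, b, c).  The merge
    -- join pathkeys can now be built in (a, b) order, a prefix of query_pathkeys,
    -- so the upper planner may only need an incremental sort on c rather than a
    -- full sort of the join output.
    EXPLAIN (COSTS OFF)
    SELECT *
    FROM t1 JOIN t2 ON t1.a = t2.a AND t1.b = t2.b
    ORDER BY t1.a, t1.b, t1.c;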
* Have ExecFindPartition cache the last found partition (David Rowley, 2022-08-02)
  Here we add code which detects when ExecFindPartition() continually finds the same partition, and add a caching layer to improve partition lookup performance for such cases.
  Both RANGE and LIST partitioned tables traditionally require a binary search to find the partition for a given set of Datums. This binary search is commonly visible in profiles when bulk loading into a partitioned table. Here we aim to reduce the overhead of bulk-loading into partitioned tables for cases where many consecutive tuples belong to the same partition and make the performance of this operation closer to what it is with a traditional non-partitioned table.
  When we find the same partition 16 times in a row, the next search will result in us simply just checking if the current set of values belongs to the last found partition. For LIST partitioning we record the index into the PartitionBoundInfo's datum array. This allows us to check if the current Datum is the same as the Datum that was last looked up. This means if any given LIST partition supports storing multiple different Datum values, then the caching only works when we find the same value as we did the last time. For RANGE partitioning we simply check if the given Datums are in the same range as the previously found partition.
  We store the details of the cached partition in PartitionDesc (i.e. relcache) so that the cached values are maintained over multiple statements.
  No caching is done for HASH partitions. The majority of the cost in HASH partition lookups is in the hashing function(s), which would also have to be executed if we were to try to do caching for HASH partitioned tables. Since most of the cost is already incurred, we just don't bother. We also don't do any caching for LIST partitions when we continually find the values being looked up belong to the DEFAULT partition. We've no corresponding index in the PartitionBoundInfo's datum array for this case. We also don't cache when we find the given values match to a LIST partitioned table's NULL partition. This is so cheap that there's no point in doing any caching for this. We also don't cache for a RANGE partitioned table's DEFAULT partition.
  There have been a number of different patches submitted to improve partition lookups. Hou, Zhijie submitted a patch to detect when the value belonging to the partition key column(s) was constant and added code to cache the partition in that case. Amit Langote then implemented an idea suggested by me to remember the last found partition and start to check if the current values work for that partition. The final patch here was written by me and was done by taking many of the ideas I liked from the patches in the thread and redesigning other aspects.
  Discussion: https://postgr.es/m/OS0PR01MB571649B27E912EA6CC4EEF03942D9%40OS0PR01MB5716.jpnprd01.prod.outlook.com
  Author: Amit Langote, Hou Zhijie, David Rowley
  Reviewed-by: Amit Langote, Hou Zhijie
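  A hedged illustration of the kind of bulk load that benefits; the table and partition names are made up for the example:
    -- Consecutive rows landing in the same partition let the cached last-found
    -- partition short-circuit the binary search that would otherwise run for
    -- every row.
    CREATE TABLE measurements (d date, v int) PARTITION BY RANGE (d);
    CREATE TABLE measurements_2022_07 PARTITION OF measurements
        FOR VALUES FROM ('2022-07-01') TO ('2022-08-01');
    INSERT INTO measurements
    SELECT '2022-07-01'::date, g FROM generate_series(1, 100000) g;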
* Check maximum number of columns in function RTEs, too. (Tom Lane, 2022-08-01)
  I thought commit fd96d14d9 had plugged all the holes of this sort, but no, function RTEs could produce oversize tuples too, either via long coldeflists or just from multiple functions in one RTE. (I'm pretty sure the other variants of base RTEs aren't a problem, because they ultimately refer to either a table or a sub-SELECT, whose widths are enforced elsewhere. But we explicitly allow join RTEs to be overwidth, as long as you don't try to form their tuple result.)
  Per further discussion of bug #17561. As before, patch all branches.
  Discussion: https://postgr.es/m/17561-80350151b9ad2ad4@postgresql.org
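  A hedged repro sketch of the coldeflist case; the function used, the column count, and the expected error wording are assumptions, not taken from the commit:
    -- A function RTE whose column definition list exceeds the maximum number of
    -- columns in a tuple should now be rejected up front instead of failing
    -- (or worse) later when the tuple is formed.
    DO $$
    DECLARE q text;
    BEGIN
      SELECT 'SELECT * FROM json_to_record(''{}'') AS r('
             || string_agg(format('c%s int', g), ', ') || ')'
        INTO q
        FROM generate_series(1, 2000) g;
      EXECUTE q;   -- expected to fail with a too-many-columns error
    END $$;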
* Fix error reporting after ioctl() call with pg_upgrade --clone (Michael Paquier, 2022-08-01)
  errno was not reported correctly after attempting to clone a file, leading to incorrect error reports. While scanning through the code, I have not noticed any similar mistakes.
  Error introduced in 3a769d8.
  Author: Justin Pryzby
  Discussion: https://postgr.es/m/20220731134135.GY15006@telsasoft.com
  Backpatch-through: 12
* Append -X to direct invocation of psql in new test for BASE_BACKUP (Michael Paquier, 2022-08-01)
  Per buildfarm member wrasse, where psql appears to open a transaction when it loads its .psqlrc, causing the test to fail.
  Oversight in ad34146.
* Add more TAP tests with BASE_BACKUP and pg_backup_start/stop (Michael Paquier, 2022-08-01)
  This commit adds some test coverage for ee79647 (prevent BASE_BACKUP from running in the middle of another base backup) and b24b2be (BASE_BACKUP cancellation followed by pg_backup_start), caused by the interactions of replication and SQL commands in a logical replication connection in a WAL sender.
  The second test uses a design close to what has been introduced in 0475a97f, where BASE_BACKUP is throttled to give enough room for a cancellation, though this time we rely on psql with multiple -c switches to keep a connection around for the second query.
  Reviewed-by: Fujii Masao
  Discussion: https://postgr.es/m/Ys/NCI4Eo9300GnQ@paquier.xyz
* Remove test_oat_hooks.c's nodetag_to_string(). (Tom Lane, 2022-07-31)
  In the short time this function has existed, it's already proven to be a nontrivial maintenance burden, since it has to be updated whenever a node tag is added or removed. Although in principle we could now automate that, I see little justification for having such functionality here at all. The function is only being applied to utility statements, for which we already have infrastructure for obtaining string names. Moreover, that infrastructure produces already-familiar-to-users names, unlike nodetag_to_string().
  So, remove this function and use the existing infrastructure instead. That saves over a thousand lines of largely-unreachable code.
  Back-patch to v15 where this code came in. Although it seems unlikely that v15's nodetag list will change anymore, we might as well keep the two branches looking and acting alike; otherwise back-patching any test-results changes in this area will be painful.
  Discussion: https://postgr.es/m/843818.1659218928@sss.pgh.pa.us
* Add --schema and --exclude-schema options to vacuumdb. (Andrew Dunstan, 2022-07-31)
  These two new options can be used to either process all tables in specific schemas or to skip processing all tables in specific schemas. This change also refactors the handling of invalid combinations of command-line options to a new helper function.
  Author: Gilles Darold
  Reviewed-by: Justin Pryzby, Nathan Bossart and Michael Paquier.
  Discussion: https://postgr.es/m/929fbf3c-24b8-d454-811f-1d5898ab3e91%40migops.com
* Fix trim_array() for zero-dimensional array argument. (Tom Lane, 2022-07-31)
  The code tried to access ARR_DIMS(v)[0] and ARR_LBOUND(v)[0] whether or not those values exist. This made the range check on the "n" argument unstable --- it might or might not fail, and if it did it would report garbage for the allowed upper limit. These bogus accesses would probably annoy Valgrind, and if you were very unlucky even lead to SIGSEGV.
  Report and fix by Martin Kalcher. Back-patch to v14 where this function was added.
  Discussion: https://postgr.es/m/baaeb413-b8a8-4656-5757-ef347e5ec11f@aboutsource.net
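  A hedged illustration (not from the commit; the exact error text is an assumption):
    -- Trimming a zero-dimensional (empty) array no longer reads nonexistent
    -- dimension data; the range check on "n" now fails cleanly instead of
    -- reporting garbage or crashing.
    SELECT trim_array(ARRAY[1, 2, 3], 1);   -- normal case: {1,2}
    SELECT trim_array('{}'::int[], 1);      -- expected to raise a clean range error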
* Feed ObjectAddress to event triggers for ALTER TABLE ATTACH/DETACH (Michael Paquier, 2022-07-31)
  These flavors of ALTER TABLE were already shaped to report the ObjectAddress of the partition attached or detached, but this data was not added to what is collected for event triggers. The tests of test_ddl_deparse are updated to show the modification in the data reported.
  Author: Hou Zhijie
  Reviewed-by: Álvaro Herrera, Amit Kapila, Hayato Kuroda, Michael Paquier
  Discussion: https://postgr.es/m/OS0PR01MB571626984BD099DADF53F38394899@OS0PR01MB5716.jpnprd01.prod.outlook.com
* Expand tests of test_ddl_deparse/ for ALTER TABLE (Michael Paquier, 2022-07-31)
  This module is expanded to track the description of the objects changed in the subcommands of ALTER TABLE by reworking the function get_altertable_subcmdtypes() (now named get_altertable_subcmdinfo) used in the event trigger of the test. It now returns a set of rows made of (subcommand type, object description) instead of a text array with only the information about the subcommand type.
  The tests have been lacking a lot of the subcommands added to AlterTableType over the years. All the missing subcommands are added, and the code is now structured so that the addition of a new subcommand will be detected, thanks to the removal of the default clause used in the switch over the subcommand types.
  The coverage of the module is increased from roughly 30% to 50%. More could be done but this is already a nice improvement.
  Author: Michael Paquier, Hou Zhijie
  Reviewed-by: Álvaro Herrera, Amit Kapila, Hayato Kuroda
  Discussion: https://postgr.es/m/OS0PR01MB571626984BD099DADF53F38394899@OS0PR01MB5716.jpnprd01.prod.outlook.com
* Improve regression test coverage of GiST index building. (Tom Lane, 2022-07-30)
  Add a test case that exercises the "buffering build" code path. This covers almost all the non-error-case lines in gistbuild.c and gistbuildbuffers.c.
  Matheus Alcantara, based on earlier work by Pavel Borisov
  Discussion: https://postgr.es/m/3z8Fde-IHbW57a7bEZtaf19f4YOCWu67IZoWJoGW18rKD9R16ZHHchf4d7KFI3Yg7-0N4NonFuwKEgh98HjMCZYoVx7KOioPo6Wn2nZRpf4=@pm.me
* Fix incorrect is-this-the-topmost-join tests in parallel planning. (Tom Lane, 2022-07-30)
  Two callers of generate_useful_gather_paths were testing the wrong thing when deciding whether to call that function: they checked for being at the top of the current join subproblem, rather than being at the actual top join. This'd result in failing to construct parallel paths for a sub-join for which they might be useful.
  While set_rel_pathlist() isn't actively broken, it seems best to make its identical-in-intention test for this be like the other two.
  This has been wrong all along, but given the lack of field complaints I'm hesitant to back-patch into stable branches; we usually prefer to avoid non-bug-fix changes in plan choices in minor releases. It seems not too late for v15 though.
  Richard Guo, reviewed by Antonin Houska and Tom Lane
  Discussion: https://postgr.es/m/CAMbWs4-mH8Zf87-w+3P2J=nJB+5OyicO28ia9q_9o=Lamf_VHg@mail.gmail.com
* Adjust new pg_read_file() test cases for more portability. (Tom Lane, 2022-07-30)
  It's allowed for an installation to remove postgresql.auto.conf, so don't rely on that being present. Instead probe whether we can read postmaster.pid. (If you've removed that, you broke the data directory's multiple-postmaster interlock, not to mention pg_ctl.)
  Per gripe from Michael Paquier.
  Discussion: https://postgr.es/m/YuSZTsoBMObyY+vT@paquier.xyz
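  A hedged sketch of the more portable probe described above (hypothetical, not the test's exact code; requires a role allowed to read server files):
    -- postmaster.pid is always present in a running cluster, unlike
    -- postgresql.auto.conf, which an installation may legitimately remove.
    SELECT pg_read_file('postmaster.pid') IS NOT NULL AS postmaster_pid_readable;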
* Revise test case added in 43746996399541ecb5c7b188725a5f097c15ceae. (Robert Haas, 2022-07-29)
  Instead of using command_ok() to run psql, use safe_psql(). wrasse isn't happy, and it may be because of failure to pass -X to the psql invocation, which safe_psql() will do automatically. Since safe_psql() returns standard output instead of writing it to a file, this requires some changes to the incantation for running 'diff'.
  Test against the 'regression' database rather than 'postgres' so we test more than just one table. That also means we need to record the horizons later, after the test does "VACUUM FULL pg_largeobject".
  Add an ORDER BY clause to the horizon query for stability.
  Patch by me, reviewed by Tom Lane.
  Discussion: http://postgr.es/m/CA+TgmoaGBbpzgu3=du1f9zDUbkfycO0y=_uWrLFy=KKEqXWeLQ@mail.gmail.com
* Fix new recovery test for log_error_verbosity=verbose case (Andrew Dunstan, 2022-07-29)
  The new test is from commit 9e4f914b5e. With this setting, messages have SQL error numbers included, so the pattern being matched needs to allow for that.
* Fix brown paper bag bug in bbe08b8869bd29d587f24ef18eb45c7d4d14afca. (Robert Haas, 2022-07-29)
  We must issue the TRUNCATE command first and update relfrozenxid and relminmxid afterward; otherwise, TRUNCATE overwrites the previously-set values.
  Add a test case like I should have done the first time.
  Per buildfarm report from TestUpgradeXversion.pm, by way of Tom Lane.
* Support pg_read_[binary_]file (filename, missing_ok). (Tom Lane, 2022-07-29)
  There wasn't an especially nice way to read all of a file while passing missing_ok = true. Add an additional overloaded variant to support that use-case.
  While here, refactor the C code to avoid a rat's nest of PG_NARGS checks, instead handling the argument collection in the outer wrapper functions. It's a bit longer this way, but far more straightforward.
  (Upon looking at the code coverage report for genfile.c, I was impelled to also add a test case for pg_stat_file() -- tgl)
  Kyotaro Horiguchi
  Discussion: https://postgr.es/m/20220607.160520.1984541900138970018.horikyota.ntt@gmail.com
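  A hedged usage sketch of the new two-argument form; the NULL-on-missing behavior is an assumption based on the existing missing_ok variants, and the file names are made up:
    -- Read a whole file without erroring out if it doesn't exist.
    SELECT pg_read_file('postmaster.pid', true);
    SELECT pg_read_binary_file('file_that_may_not_exist', true);  -- expected NULL if absent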
* In transformRowExpr(), check for too many columns in the row. (Tom Lane, 2022-07-29)
  A RowExpr with more than MaxTupleAttributeNumber columns would fail at execution anyway, since we cannot form a tuple datum with more than that many columns. While heap_form_tuple() has a check for too many columns, it emerges that there are some intermediate bits of code that don't check and can be driven to failure with sufficiently many columns. Checking this at parse time seems like the most appropriate place to install a defense, since we already check SELECT list length there.
  While at it, make the SELECT-list-length error use the same errcode (TOO_MANY_COLUMNS) as heap_form_tuple does, rather than the generic PROGRAM_LIMIT_EXCEEDED.
  Per bug #17561 from Egor Chindyaskin. The given test case crashes in all supported branches (and probably a lot further back), so patch all.
  Discussion: https://postgr.es/m/17561-80350151b9ad2ad4@postgresql.org
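  A hedged repro sketch (not the bug submitter's test case); the column count just needs to exceed MaxTupleAttributeNumber (1664), and the exact error wording is an assumption:
    -- A ROW() expression with more columns than MaxTupleAttributeNumber is now
    -- rejected at parse time with a TOO_MANY_COLUMNS error instead of failing
    -- (or crashing) later in execution.
    DO $$
    DECLARE q text;
    BEGIN
      SELECT 'SELECT ROW(' || string_agg('1', ',') || ')'
        INTO q
        FROM generate_series(1, 2000);
      EXECUTE q;   -- expected: ERROR reporting too many columns
    END $$;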
* Fix mistake in bbe08b8869bd29d587f24ef18eb45c7d4d14afca. (Robert Haas, 2022-07-29)
  The earlier commit used pg_class.relfilenode where it should have used pg_class.oid. This could lead to emitting an UPDATE statement into the dump that would update nothing (or the wrong thing) when executed in the new cluster, resulting in relfrozenxid and relminmxid being improperly carried forward for pg_largeobject.
  Noticed by Dilip Kumar.
  Discussion: http://postgr.es/m/CAFiTN-ty1Gzs6stk2vt9BJiq0m0hzf=aPnh3a-4Z3Tk5GzoENw@mail.gmail.com
* Fix test instability (Alvaro Herrera, 2022-07-29)
  On FreeBSD, the new test fails due to a WAL file being removed before the standby has had the chance to copy it. Fix by adding a replication slot to prevent the removal until after the standby has connected.
  Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
  Reported-by: Matthias van de Meent <boekewurm+postgres@gmail.com>
  Discussion: https://postgr.es/m/CAEze2Wj5nau_qpjbwihvmXLfkAWOZ5TKdbnqOc6nKSiRJEoPyQ@mail.gmail.com
* Move related functions next to each other in pg_publication.c. (Amit Kapila, 2022-07-29)
  This also improves comments atop is_publishable_class().
  Author: Peter Smith
  Reviewed-by: Amit Kapila, Hou Zhijie
  Discussion: https://postgr.es/m/CAHut+PupQ5UW9A9ut0Yjt21J9tHhx958z5L0k8-9hTYf_NYqxA@mail.gmail.com
* Use TRUNCATE to preserve relfilenode for pg_largeobject + index. (Robert Haas, 2022-07-28)
  Commit 9a974cbcba005256a19991203583a94b4f9a21a9 arranged to preserve the relfilenode of user tables across pg_upgrade, but failed to notice that pg_upgrade treats pg_largeobject as a user table and thus it needs the same treatment. Otherwise, large objects will appear to vanish after a pg_upgrade.
  Commit d498e052b4b84ae21b3b68d5b3fda6ead65d1d4d fixed this problem by teaching pg_dump to UPDATE pg_class.relfilenode for pg_largeobject and its index. However, because an UPDATE on the catalog rows doesn't change anything on disk, this can leave stray files behind in the new cluster. They will normally be empty, but it's a little bit untidy.
  Hence, this commit arranges to do the same thing using DDL. Specifically, it makes TRUNCATE work for the pg_largeobject catalog when in binary-upgrade mode, and it then uses that command in binary-upgrade dumps as a way of setting pg_class.relfilenode for pg_largeobject and its index. That way, the old files are removed from the new cluster.
  Discussion: http://postgr.es/m/CA+TgmoYYMXGUJO5GZk1-MByJGu_bB8CbOL6GJQC8=Bzt6x6vDg@mail.gmail.com
* Improve speed of hash index build. (Tom Lane, 2022-07-28)
  In the initial data sort, if the bucket numbers are the same then next sort on the hash value. Because index pages are kept in hash value order, this gains a little speed by allowing the eventual tuple insertions to be done sequentially, avoiding repeated data movement within PageAddItem.
  This seems to be good for an overall speedup of 5%-9%, depending on the incoming data.
  Simon Riggs, reviewed by Amit Kapila
  Discussion: https://postgr.es/m/CANbhV-FG-1ZNMBuwhUF7AxxJz3u5137dYL-o6hchK1V_dMw86g@mail.gmail.com
* Clean up some residual confusion between OIDs and RelFileNumbers. (Robert Haas, 2022-07-28)
  Commit b0a55e43299c4ea2a9a8c757f9c26352407d0ccc missed a few places where we are referring to the number used as a part of the relation filename as an "OID". We now want to call that a "RelFileNumber".
  Some of these places actually made it sound like the OID in question is pg_class.oid rather than pg_class.relfilenode, which is especially good to clean up.
  Dilip Kumar with some editing by me.
* Fix replay of create database records on standby (Alvaro Herrera, 2022-07-28)
  Crash recovery on standby may encounter missing directories when replaying database-creation WAL records. Prior to this patch, the standby would fail to recover in such a case; however, the directories could be legitimately missing. Consider the following sequence of commands:
    CREATE DATABASE
    DROP DATABASE
    DROP TABLESPACE
  If, after replaying the last WAL record and removing the tablespace directory, the standby crashes and has to replay the create database record again, crash recovery must be able to continue.
  A fix for this problem was already attempted in 49d9cfc68bf4, but it was reverted because of design issues. This new version is based on Robert Haas' proposal: any missing tablespaces are created during recovery before reaching consistency. Tablespaces are created as real directories, and should be deleted by later replay. CheckRecoveryConsistency ensures they have disappeared.
  The problems detected by this new code are reported as PANIC, except when allow_in_place_tablespaces is set to ON, in which case they are WARNING. Apart from making tests possible, this gives users an escape hatch in case things don't go as planned.
  Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
  Author: Asim R Praveen <apraveen@pivotal.io>
  Author: Paul Guo <paulguo@gmail.com>
  Reviewed-by: Anastasia Lubennikova <lubennikovaav@gmail.com> (older versions)
  Reviewed-by: Fujii Masao <masao.fujii@oss.nttdata.com> (older versions)
  Reviewed-by: Michaël Paquier <michael@paquier.xyz>
  Diagnosed-by: Paul Guo <paulguo@gmail.com>
  Discussion: https://postgr.es/m/CAEET0ZGx9AvioViLf7nbR_8tH9-=27DN5xWJ2P9-ROH16e4JUA@mail.gmail.com
* Fix comment in procarray.c. (Fujii Masao, 2022-07-28)
  Commit fea10a6434 renamed VariableCacheData.nextFullXid to nextXid. But commit dc7420c2c9 introduced the comment mentioning nextFullXid. This commit changes "nextFullXid" to "nextXid" in the comment.
  Author: Zhang Mingli
  Discussion: https://postgr.es/m/642BA615-4B28-4B0C-BDF6-4D33E366BCDF@gmail.com
* Fix get_dirent_type() for symlinks on MinGW/MSYS. (Thomas Munro, 2022-07-28)
  On Windows with MSVC, get_dirent_type() was recently made to return DT_LNK for junction points by commit 9d3444dc, which fixed some defective dirent.c code.
  On Windows with Cygwin, get_dirent_type() already worked for symlinks, as it does on POSIX systems, because Cygwin has its own fake symlinks that behave like POSIX (on closer inspection, Cygwin's dirent has the BSD d_type extension but it's probably always DT_UNKNOWN, so we fall back to lstat(), which understands Cygwin symlinks with S_ISLNK()).
  On Windows with MinGW/MSYS, we need extra code, because the MinGW runtime has its own readdir() without d_type, and the lstat()-based fallback has no knowledge of our convention for treating junctions as symlinks.
  Back-patch to 14, where get_dirent_type() landed.
  Reported-by: Andrew Dunstan <andrew@dunslane.net>
  Discussion: https://postgr.es/m/b9ddf605-6b36-f90d-7c30-7b3e95c46276%40dunslane.net
* Bump catversion for commit d8cd0c6c95c0120168df93aae095df4e0682a08a. (Robert Haas, 2022-07-27)
  The catalog contents haven't changed, but it's good to make clear that initdb is required. Changing RELMAPPER_FILEMAGIC would be more appropriate, but that doesn't actually produce a useful diagnostic, so cheat by doing this instead.
  Discussion: http://postgr.es/m/20220727171939.6ixixqcjt5riil2o@alvherre.pgsql
* Convert macros to static inline functions (buf_internals.h) (Robert Haas, 2022-07-27)
  Dilip Kumar, reviewed by Vignesh C, Ashutosh Sharma, and me.
  Discussion: http://postgr.es/m/CAFiTN-tYbM7D+2UGiNc2kAFMSQTa5FTeYvmg-Vj2HvPdVw2Gvg@mail.gmail.com
* Fix read_relmap_file() concurrency on Windows. (Robert Haas, 2022-07-27)
  Commit d8cd0c6c95c0120168df93aae095df4e0682a08a introduced a file rename that could fail on Windows, probably due to other backends having an open file handle to the old file of the same name. Re-arrange the locking slightly to prevent that, by making sure the open() and close() run while we hold the lock.
  Thomas Munro. I added an explanatory comment.
  Discussion: https://postgr.es/m/CA%2BhUKGLZtCTgp4NTWV-wGbR2Nyag71%3DEfYTKjDKnk%2BfkhuFMHw%40mail.gmail.com
* Refactor code in charge of grabbing the relations of a subscription (Michael Paquier, 2022-07-27)
  GetSubscriptionRelations() and GetSubscriptionNotReadyRelations() share mostly the same code, which scans pg_subscription_rel and fetches all the relations of a given subscription. The only difference is that the second routine looks for all the relations not in a ready state. This commit refactors the code to use a single routine, shaving a bit of code.
  Author: Vignesh C
  Reviewed-By: Kyotaro Horiguchi, Amit Kapila, Michael Paquier, Peter Smith
  Discussion: https://postgr.es/m/CALDaNm0eW-9g4G_EzHebnFT5zZoasWCS_EzZQ5BgnLZny9S=pg@mail.gmail.com
* Split tuplesortvariants.c from tuplesort.c (Alexander Korotkov, 2022-07-27)
  This commit puts the implementation of the tuple sort variants into a separate file, tuplesortvariants.c. That gives a better separation of the code and demonstrates that a tuple sort variant can be defined outside of tuplesort.c.
  Discussion: https://postgr.es/m/CAPpHfdvjix0Ahx-H3Jp1M2R%2B_74P-zKnGGygx4OWr%3DbUQ8BNdw%40mail.gmail.com
  Author: Alexander Korotkov
  Reviewed-by: Pavel Borisov, Maxim Orlov, Matthias van de Meent
  Reviewed-by: Andres Freund, John Naylor
* Split TuplesortPublic from Tuplesortstate (Alexander Korotkov, 2022-07-27)
  The new TuplesortPublic data structure contains the definition of the sort-variant-specific interface methods and the part of the tuple sort operation state required by their implementations. This will let us define tuple sort variants without knowledge of Tuplesortstate, that is, without knowledge of the generic sort implementation's guts.
  Discussion: https://postgr.es/m/CAPpHfdvjix0Ahx-H3Jp1M2R%2B_74P-zKnGGygx4OWr%3DbUQ8BNdw%40mail.gmail.com
  Author: Alexander Korotkov
  Reviewed-by: Pavel Borisov, Maxim Orlov, Matthias van de Meent
  Reviewed-by: Andres Freund, John Naylor
* Move memory management away from writetup() and tuplesort_put*() (Alexander Korotkov, 2022-07-27)
  This commit moves some generic work out of the sort-variant-specific functions. In particular, tuplesort_put*() now doesn't need to decrease available memory and switch to the sort context before calling puttuple_common(), and writetup() doesn't need to free SortTuple.tuple and increase available memory.
  Discussion: https://postgr.es/m/CAPpHfdvjix0Ahx-H3Jp1M2R%2B_74P-zKnGGygx4OWr%3DbUQ8BNdw%40mail.gmail.com
  Author: Alexander Korotkov
  Reviewed-by: Pavel Borisov, Maxim Orlov, Matthias van de Meent
  Reviewed-by: Andres Freund, John Naylor
* Put abbreviation logic into puttuple_common() (Alexander Korotkov, 2022-07-27)
  The abbreviation code is very similar across the tuplesort_put*() functions. This commit unifies that code and puts it into puttuple_common(). The tuplesort_put*() functions differ in the abbreviation condition, so it has been added as an argument to the puttuple_common() function.
  Discussion: https://postgr.es/m/CAPpHfdvjix0Ahx-H3Jp1M2R%2B_74P-zKnGGygx4OWr%3DbUQ8BNdw%40mail.gmail.com
  Author: Alexander Korotkov
  Reviewed-by: Pavel Borisov, Maxim Orlov, Matthias van de Meent
  Reviewed-by: Andres Freund, John Naylor
* Add new Tuplesortstate.removeabbrev function (Alexander Korotkov, 2022-07-27)
  This commit is preparation for moving the abbreviation logic into puttuple_common(). The new removeabbrev function turns the datum1 representation of SortTuples from the abbreviated key into the first column value. It therefore encapsulates the differing part of the abbreviation-handling code in the tuplesort_put*() functions, making these functions similar.
  Discussion: https://postgr.es/m/CAPpHfdvjix0Ahx-H3Jp1M2R%2B_74P-zKnGGygx4OWr%3DbUQ8BNdw%40mail.gmail.com
  Author: Alexander Korotkov
  Reviewed-by: Pavel Borisov, Maxim Orlov, Matthias van de Meent
  Reviewed-by: Andres Freund, John Naylor
* Remove Tuplesortstate.copytup function (Alexander Korotkov, 2022-07-27)
  It's currently unclear how functionality is split between the Tuplesortstate.copytup() function and the tuplesort_put*() functions. For instance, copytup_index() and copytup_datum() raise an error, while tuplesort_putindextuplevalues() and tuplesort_putdatum() do their work. This commit removes Tuplesortstate.copytup() altogether, putting the corresponding code into tuplesort_put*().
  Discussion: https://postgr.es/m/CAPpHfdvjix0Ahx-H3Jp1M2R%2B_74P-zKnGGygx4OWr%3DbUQ8BNdw%40mail.gmail.com
  Author: Alexander Korotkov
  Reviewed-by: Pavel Borisov, Maxim Orlov, Matthias van de Meent
  Reviewed-by: Andres Freund, John Naylor
* Add overflow protection for block-related data in WAL records (Michael Paquier, 2022-07-27)
  XLogRecordBlockHeader, the header holding the information for the data related to a block, tracks the length of the data appended to the WAL record with data_length (uint16). This limitation in size was not enforced by the public routine in charge of registering the data assembled later to form the WAL record inserted, XLogRegisterBufData(). Incorrectly used, it could lead to the generation of records with some of their data overflowed. This commit adds some safeguards to prevent that for the block data, complaining immediately if attempting to add to a record block information with a size larger than UINT16_MAX, which is the limit implied by the internal logic.
  Note that this also adjusts XLogRegisterData() and XLogRegisterBufData() so that the length of the WAL record data given by the caller is unsigned, matching what gets stored in XLogRecData->len.
  Extracted from a larger patch by the same author. The original patch includes more protections when assembling a record in full that will be looked at separately later.
  Author: Matthias van de Meent
  Reviewed-by: Andres Freund, Heikki Linnakangas, Michael Paquier, David Zhang
  Discussion: https://postgr.es/m/CAEze2WgGiw+LZt+vHf8tWqB_6VxeLsMeoAuod0N=ij1q17n5pw@mail.gmail.com
* Improve makeArrayTypeName's algorithm for choosing array type names. (Tom Lane, 2022-07-26)
  As before, we start by prepending one underscore (truncating the base name if necessary). But if there is a conflict, then instead of prepending more and more underscores, append an underscore and some digits, in much the same way that ChooseRelationName does. While the previous logic could be driven to fail by creating a lot of types with long names differing only near the end, this version seems certain enough to eventually succeed that we can remove the failure code path that was there before.
  While at it, undo 6df7a9698's decision to split this code out of makeArrayTypeName. That wasn't actually accomplishing anything, because no other function was using it --- and it would have been wrong to do so. The convention that a prefix "_" means an array, not something else, is too ancient to mess with.
  Andrey Lepikhov and Dmitry Koval, reviewed by Masahiko Sawada and myself
  Discussion: https://postgr.es/m/b84cd82c-cc67-198a-8b1c-60f44e1259ad@postgrespro.ru
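  A hedged illustration of the naming convention being adjusted; the type name is hypothetical, and the conflict-resolution suffix mentioned in the comment reflects the description above rather than verified output:
    -- Creating a type also creates its array type, named by prepending an
    -- underscore.  With this change, a name collision is resolved by appending
    -- an underscore and digits (like ChooseRelationName) rather than by piling
    -- on ever more leading underscores.
    CREATE TYPE mytype AS (x int);
    SELECT typname FROM pg_type WHERE typname LIKE '%mytype' ORDER BY typname;
    -- expected to list both mytype and its array type _mytype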
* Fix brain fade in e530be2c5ce77475d56ccf8f4e0c4872b666ad5f. (Robert Haas, 2022-07-26)
  The BoolGetDatum() call ended up in the wrong place. It should be applied when we, err, want to convert a bool to a datum.
  Thanks to Tom Lane for noticing this.
  Discussion: http://postgr.es/m/2511599.1658861964@sss.pgh.pa.us