path: root/src
* Clean up some misplaced comments in partition_join.sql regression test. (Etsuro Fujita, 2019-12-16)
  Also, add a comment explaining a test case. Back-patch to 11 where the regression test was added.
  Discussion: https://postgr.es/m/CAPmGK15adZPh2B%2BmGUjSOMH%2BH39ogDRWfCfm4G6jncZCAs9V_Q%40mail.gmail.com
* Remove duplicated progress reporting during heap scan of VACUUM (Michael Paquier, 2019-12-15)
  This was introduced by c16dc1a, which added progress reporting for VACUUM. As this issue just causes some extra work and is harmless, no backpatch is done.
  Author: Justin Pryzby
  Discussion: https://postgr.es/m/20191213030831.GT2082@telsasoft.com
* Try to stabilize results of new tuplesort regression test. (Tom Lane, 2019-12-14)
  It appears that a concurrent autovacuum/autoanalyze run can cause changes in the plans expected by this test. To prevent that, change the tables it uses to be temp tables --- there's no need for them to be permanent, and this should save a few cycles too.
  Discussion: https://postgr.es/m/3244.1576160824@sss.pgh.pa.us
* Prevent overly-aggressive collapsing of joins to RTE_RESULT relations. (Tom Lane, 2019-12-14)
  The RTE_RESULT simplification logic added by commit 4be058fe9 had a flaw: it would collapse out a RTE_RESULT that is due to compute a PlaceHolderVar, and reassign the PHV to the parent join level, even if another input relation of the join contained a lateral reference to the PHV. That can't work because the PHV would be computed too late. In practice it led to failures of internal sanity checks later in planning (either assertion failures or errors such as "failed to construct the join relation").
  To fix, add code to check for the presence of such PHVs in relevant portions of the query tree. Notably, this required refactoring range_table_walker so that a caller could ask to walk individual RTEs not the whole list. (It might be a good idea to refactor range_table_mutator in the same way, if only to keep those functions looking similar; but I didn't do so here as it wasn't necessary for the bug fix.)
  This exercise also taught me that find_dependent_phvs(), as it stood, could only safely be used on the entire Query, not on subtrees. Adjust its API to reflect that; which in passing allows it to have a fast path for the common case of no PHVs anywhere.
  Per report from Will Leinweber. Back-patch to v12 where the bug was introduced.
  Discussion: https://postgr.es/m/CALLb-4xJMd4GZt2YCecMC95H-PafuWNKcmps4HLRx2NHNBfB4g@mail.gmail.com
* Fix memory leak when initializing DH parameters in backend (Michael Paquier, 2019-12-14)
  When loading DH parameters used for the generation of ephemeral DH keys in the backend, the code has never bothered releasing the memory used for the DH information loaded from a file or from libpq's default. This commit makes sure that the information is properly free()'d. Note that as SSL parameters can be reloaded, this can cause an accumulation of leaked memory. As the leak is minor, no backpatch is done.
  Reported-by: Dmitry Uspenskiy
  Discussion: https://postgr.es/m/16160-18367e56e9a28264@postgresql.org
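
  As a rough illustration of the load-use-free pattern described above (a sketch only, not the committed backend code; the function name and structure are illustrative): SSL_CTX_set_tmp_dh() keeps its own copy of the DH parameters, so the caller must free what it loaded.

      #include <stdio.h>
      #include <stdbool.h>
      #include <openssl/dh.h>
      #include <openssl/pem.h>
      #include <openssl/ssl.h>

      static bool
      load_dh_params(SSL_CTX *context, const char *path)
      {
          FILE   *fp;
          DH     *dh;

          if ((fp = fopen(path, "r")) == NULL)
              return false;
          dh = PEM_read_DHparams(fp, NULL, NULL, NULL);
          fclose(fp);
          if (dh == NULL)
              return false;

          if (SSL_CTX_set_tmp_dh(context, dh) != 1)
          {
              DH_free(dh);        /* free on the error path ... */
              return false;
          }
          DH_free(dh);            /* ... and on success, fixing the leak */
          return true;
      }
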
* Fix mdsyncfiletag(), take II. (Thomas Munro, 2019-12-14)
  The previous commit failed to consider that FileGetRawDesc() might not return a valid fd, as discovered on the build farm. Switch to using the File interface only. Back-patch to 12, like the previous commit.
* Don't use _mdfd_getseg() in mdsyncfiletag(). (Thomas Munro, 2019-12-14)
  _mdfd_getseg() opens all segments up to the requested one. That causes problems for mdsyncfiletag(), if mdunlinkfork() has already unlinked other segment files. Open the file we want directly by name instead, if it's not already open.
  The consequence of this bug was a rare panic in the checkpointer, made more likely if you saturated the sync request queue so that the SYNC_FORGET_REQUEST messages for a given relation were more likely to be absorbed in separate cycles by the checkpointer.
  Back-patch to 12. Defect in commit 3eb77eba.
  Author: Thomas Munro
  Reported-by: Justin Pryzby
  Discussion: https://postgr.es/m/20191119115759.GI30362%40telsasoft.com
* Fix crash when a page was split during GiST index creation. (Heikki Linnakangas, 2019-12-13)
  The bug was similar to the one that was fixed in commit 22251686f0. When we split page X and insert the downlink for the new page, the parent page might also need to be split. When that happens, the downlink offset number we remembered for X is no longer valid. We correctly called gistFindCorrectParent() to re-find it, but gistFindCorrectParent() doesn't do anything if the LSN of the page hasn't changed, and we stopped updating LSNs during index build in commit 9155580fd5. The buggy codepath was taken if the page was split into three or more pages, and inserting the downlink caused the parent page to split. To fix, explicitly mark the downlink offset number as invalid, to force gistFindCorrectParent() to re-find it.
  Fixes bug #16134 reported by Alexander Lakhin, reported again as #16162 by Andreas Kunert. Thanks to Jeff Janes, Tom Lane and Tomas Vondra for debugging. Backpatch to v12, where we stopped WAL-logging during index build.
  Discussion: https://www.postgresql.org/message-id/16134-0423f729671dec64%40postgresql.org
  Discussion: https://www.postgresql.org/message-id/16162-45d21b7b6c1a3105%40postgresql.org
* Modernize our readline API a tad. (Tom Lane, 2019-12-13)
  Prefer to call "rl_filename_completion_function" and "rl_completion_matches", rather than using the names without the rl_ prefix. This matches Readline's documentation, and makes our code a little clearer about which names are external.
  On platforms that only have the un-prefixed names (just some very ancient versions of libedit, AFAICT), reverse the direction of the compatibility macro definitions to match.
  Also, remove our extern declaration of "filename_completion_function"; whatever libedit versions may have failed to declare that are surely dead and buried.
  Discussion: https://postgr.es/m/23608.1576248145@sss.pgh.pa.us
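
  A sketch of what "reversing the direction" of the compatibility macros means (illustrative only; the HAVE_* configure symbols here are assumptions, not necessarily the committed names): the rl_-prefixed names are now what the code calls, and the macros map them onto the old unprefixed names only where those are all that exists.

      #if defined(HAVE_FILENAME_COMPLETION_FUNCTION) && \
          !defined(HAVE_RL_FILENAME_COMPLETION_FUNCTION)
      #define rl_filename_completion_function filename_completion_function
      #endif

      #if defined(HAVE_COMPLETION_MATCHES) && \
          !defined(HAVE_RL_COMPLETION_MATCHES)
      #define rl_completion_matches completion_matches
      #endif
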
* Put back regression test case in a more robust form. (Tom Lane, 2019-12-12)
  This undoes my hurried commit 776a2c887, restoring the removed test case in a form that passes with or without force_parallel_mode = regress.
  It turns out that force_parallel_mode = regress simply fails to mask the Worker lines that will be produced by EXPLAIN (ANALYZE, VERBOSE). I'd say that's a bug in that feature, as its entire alleged reason for existence is to make the EXPLAIN output the same. It's certainly not a bug in the plan node pruning logic.
  Fortunately, this test case doesn't really need to use ANALYZE, so just drop that.
  Discussion: https://postgr.es/m/18891.1576109690@sss.pgh.pa.us
* Fix EXTRACT(ISOYEAR FROM timestamp) for years BC. (Tom Lane, 2019-12-12)
  The test cases added by commit 26ae3aa80 exposed an old oversight in timestamp[tz]_part: they didn't correct the result of date2isoyear() for BC years, so that we produced an off-by-one answer for such years. Fix that, and back-patch to all supported branches.
  Discussion: https://postgr.es/m/SG2PR06MB37762CAE45DB0F6CA7001EA9B6550@SG2PR06MB3776.apcprd06.prod.outlook.com
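
  For illustration (these particular dates are not the test cases from commit 26ae3aa80), the affected kind of query looks like this; before the fix, the ISOYEAR result for BC inputs came out off by one:

      -- ISO year of a BC timestamp; previously off by one for BC years
      SELECT EXTRACT(ISOYEAR FROM timestamp '0044-03-15 BC');
      SELECT EXTRACT(ISOYEAR FROM timestamptz '0044-01-01 00:00:00+00 BC');
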
* Remove redundant function calls in timestamp[tz]_part(). (Tom Lane, 2019-12-12)
  The DTK_DOW/DTK_ISODOW and DTK_DOY switch cases in timestamp_part() and timestamptz_part() contained calls of timestamp2tm() that were fully redundant with the ones done just above the switch. This evidently crept in during commit 258ee1b63, which relocated that code from another place where the calls were indeed needed. Just delete the redundant calls.
  I (tgl) noted that our test coverage of these functions left quite a bit to be desired, so extend timestamp.sql and timestamptz.sql to cover all the branches.
  Back-patch to all supported branches, as the previous commit was. There's no real issue here other than some wasted cycles in some not-too-heavily-used code paths, but the test coverage seems valuable.
  Report and patch by Li Japin; test case adjustments by me.
  Discussion: https://postgr.es/m/SG2PR06MB37762CAE45DB0F6CA7001EA9B6550@SG2PR06MB3776.apcprd06.prod.outlook.com
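
  As a small illustration of the switch branches involved (not the actual additions to timestamp.sql/timestamptz.sql), queries like these exercise the DTK_DOW, DTK_ISODOW and DTK_DOY paths:

      SELECT EXTRACT(DOW    FROM timestamp '2019-12-15 12:00:00');  -- 0, Sunday
      SELECT EXTRACT(ISODOW FROM timestamp '2019-12-15 12:00:00');  -- 7, Sunday
      SELECT EXTRACT(DOY    FROM timestamp '2019-12-15 12:00:00');  -- 349
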
* (Blindly) tweak new test regex (Alvaro Herrera, 2019-12-12)
  gcc-based Windows buildfarm animals are not happy about a multiline regular expression I added recently. Try to accommodate; existing pg_basebackup tests suggest that \n should work instead of a bare newline, but throw in \r also. This being perl, TIMTOWTDI. Also remove the pointless $ at the end of the pattern, for extra luck. (If this doesn't work, I'll probably just split the regex in two.)
  Per buildfarm members jacana and fairywren.
  Discussion: https://postgr.es/m/3562.1576161217@sss.pgh.pa.us
* Remove extra parenthesis from comment. (Etsuro Fujita, 2019-12-12)
* Add readfuncs.c support for AppendRelInfo. (Tom Lane, 2019-12-11)
  This is made necessary by the fact that commit 6ef77cf46 added AppendRelInfos to plan trees. I'd concluded that this extra code was not necessary because we don't transmit that data to parallel workers ... but I forgot about -DWRITE_READ_PARSE_PLAN_TREES. Per buildfarm.
* Remove unstable test case added in commit 5935917ce. (Tom Lane, 2019-12-11)
  The buildfarm says this produces some unexpected output with force_parallel_mode = regress. There's probably a bug underneath that, but for the moment just delete the test case to make the buildfarm green again.
  (I now notice that the case had also failed to get updated to follow commit d52eaa094, which made plan_cache_mode = force_generic_plan prevail throughout partition_prune.sql; it was thereby managing to break a later test. When/if we put this back in, *don't* include the SET and RESET commands.)
* Allow executor startup pruning to prune all child nodes. (Tom Lane, 2019-12-11)
  Previously, if the startup pruning logic proved that all child nodes of an Append or MergeAppend could be pruned, we still kept one, just to keep EXPLAIN from failing. The previous commit removed the ruleutils.c limitation that required this kluge, so drop it. That results in less-confusing EXPLAIN output, as per a complaint from Yuzuko Hosoya.
  David Rowley
  Discussion: https://postgr.es/m/001001d4f44b$2a2cca50$7e865ef0$@lab.ntt.co.jp
* Further adjust EXPLAIN's choices of table alias names. (Tom Lane, 2019-12-11)
  This patch causes EXPLAIN to always assign a separate table alias to the parent RTE of an append relation (inheritance set); before, such RTEs were ignored if not actually scanned by the plan. Since the child RTEs now always have that same alias to start with (cf. commit 55a1954da), the net effect is that the parent RTE usually gets the alias used or implied by the query text, and the children all get that alias with "_N" appended. (The exception to "usually" is if there are duplicate aliases in different subtrees of the original query; then some of those original RTEs will also have "_N" appended.)
  This results in more uniform output for partitioned-table plans than we had before: the partitioned table itself gets the original alias, and all child tables have aliases with "_N", rather than the previous behavior where one of the children would get an alias without "_N".
  The reason for giving the parent RTE an alias, even if it isn't scanned by the plan, is that we now use the parent's alias to qualify Vars that refer to an appendrel output column and appear above the Append or MergeAppend that computes the appendrel. But below the append, Vars refer to some one of the child relations, and are displayed that way. This seems clearer than the old behavior where a Var that could carry values from any child relation was displayed as if it referred to only one of them.
  While at it, change ruleutils.c so that the code paths used by EXPLAIN deal in Plan trees not PlanState trees. This effectively reverts a decision made in commit 1cc29fe7c, which seemed like a good idea at the time to make ruleutils.c consistent with explain.c. However, it's problematic because we'd really like to allow executor startup pruning to remove all the children of an append node when possible, leaving no child PlanState to resolve Vars against. (That's not done here, but will be in the next patch.) This requires different handling of subplans and initplans than before, but is otherwise a pretty straightforward change.
  Discussion: https://postgr.es/m/001001d4f44b$2a2cca50$7e865ef0$@lab.ntt.co.jp
* Emit parameter values during query bind/execute errors (Alvaro Herrera, 2019-12-11)
  This makes such log entries more useful, since the cause of the error can be dependent on the parameter values.
  Author: Alexey Bashtanov, Álvaro Herrera
  Discussion: https://postgr.es/m/0146a67b-a22a-0519-9082-bc29756b93a2@imap.cc
  Reviewed-by: Peter Eisentraut, Andres Freund, Tom Lane
* Use only one thread to handle incoming signals on Windows. (Tom Lane, 2019-12-11)
  Since its inception, our Windows signal emulation code has worked by running a main signal thread that just watches for incoming signal requests, and then spawns a new thread to handle each such request. That design is meant for servers in which requests can take substantial effort to process, and it's worth parallelizing the handling of requests. But those assumptions are just bogus for our signal code. It's not much more than pg_queue_signal(), which is cheap and can't parallelize at all, plus we don't really expect lots of signals to arrive at the same backend at once. More importantly, this approach creates failure modes that we could do without: either inability to spawn a new thread or inability to create a new pipe handle will risk loss of signals.
  Hence, dispense with the separate per-signal threads and just service each request in-line in the main signal thread. This should be a bit faster (for the normal case of one signal at a time) as well as more robust.
  Patch by me; thanks to Andrew Dunstan for testing and Amit Kapila for review.
  Discussion: https://postgr.es/m/4412.1575748586@sss.pgh.pa.us
* Remove ATPrepSetStatistics (Peter Eisentraut, 2019-12-11)
  It was once possible to do ALTER TABLE ... SET STATISTICS on system tables without allow_system_table_mods. This was changed apparently by accident between PostgreSQL 9.1 and 9.2, but a code comment still claimed this was possible. Without that functionality, having a separate ATPrepSetStatistics() is useless, so use the generic ATSimplePermissions() instead and move the remaining custom code into ATExecSetStatistics().
  Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
  Discussion: https://www.postgresql.org/message-id/flat/cc8d2648-a0ec-7a86-13e5-db473484e19e%402ndquadrant.com
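
  For reference, the user-visible command is unchanged (the table and column names below are only illustrative); the cleanup is purely about which internal routine performs the permission checks:

      -- Ordinary SET STATISTICS usage; system tables remain off limits
      -- without allow_system_table_mods.
      ALTER TABLE measurements ALTER COLUMN reading SET STATISTICS 500;
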
* Fix output of Unicode normalization test (Peter Eisentraut, 2019-12-11)
  Several off-by-more-than-one errors caused the output in case of a test failure to be truncated and unintelligible.
  Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
  Discussion: https://www.postgresql.org/message-id/flat/6a7a8516-7d11-8fbd-0e8b-eadb4f0679eb%402ndquadrant.com
* Fix some compiler warnings with timestamp parsing in formatting.c (Michael Paquier, 2019-12-11)
  gcc-7, used with a sufficient optimization level, emits warnings about do_to_timestamp() regarding the initialization and handling of some of its variables. Recent commits 66c74f8 and d589f94 made the interface more confusing, so document which variables are always expected and properly initialize the optional ones when they are set.
  Author: Andrey Lepikhov, Michael Paquier
  Discussion: https://postgr.es/m/a7e28b83-27b1-4e1c-c76b-4268c4b785bc@postgrespro.ru
* Fix tuple column count in pg_control_init(). (Tom Lane, 2019-12-10)
  Oversight in commit 2e4db241b.
  Nathan Bossart
  Discussion: https://postgr.es/m/1B616360-396A-4482-AA28-375566C86160@amazon.com
* Cosmetic cleaning of pg_config.h.win32 (Peter Eisentraut, 2019-12-10)
  Clean up some comments (some generated by old versions of autoconf) and some random ordering differences, so it's easier to diff this against the default pg_config.h or pg_config.h.in. Remove LOCALEDIR handling from pg_config.h.win32 altogether because it's already in pg_config_paths.h.
* Add backend-only appendStringInfoStringQuoted (Alvaro Herrera, 2019-12-10)
  This provides a mechanism to emit literal values in informative messages, such as query parameters. The new code is more complex than what it replaces, primarily because it wants to be more efficient. It also has the (currently unused) additional optional capability of specifying a maximum size to print.
  The new function lives outside common/stringinfo.c so that frontend users of that file need not pull in unnecessary multibyte-encoding support code.
  Author: Álvaro Herrera and Alexey Bashtanov, after a suggestion from Andres Freund
  Reviewed-by: Tom Lane
  Discussion: https://postgr.es/m/20190920203905.xkv5udsd5dxfs6tr@alap3.anarazel.de
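
  A hedged usage sketch (backend code; the exact signature and the meaning of the maximum-length argument are inferred from the description above and may differ from the committed version; the message text and value are placeholders):

      const char *param_value = "O'Reilly";    /* placeholder value */
      StringInfoData buf;

      initStringInfo(&buf);
      /* append the value quoted as a SQL literal, truncated to ~64 bytes */
      appendStringInfoStringQuoted(&buf, param_value, 64);
      elog(LOG, "parameter value: %s", buf.data);
      pfree(buf.data);
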
* In pg_ctl, work around ERROR_SHARING_VIOLATION on the postmaster log file. (Tom Lane, 2019-12-10)
  On Windows, we use CMD.EXE to redirect the postmaster's stdout/stderr into a log file. CMD.EXE will open that file with non-sharing-friendly parameters, and the file will remain open for a short time after the postmaster has removed postmaster.pid. This can result in an ERROR_SHARING_VIOLATION failure if we attempt to start a new postmaster immediately with the same log file (e.g. during "pg_ctl restart"). This seems to explain intermittent buildfarm failures we've been seeing on Windows machines.
  To fix, just open and close the log file using our own pgwin32_open(), which will wait if necessary to avoid the failure. (Perhaps someday we should stop using CMD.EXE, but that would be a far more complex patch, and it doesn't seem worth the trouble ... yet.)
  Back-patch to v12. This only solves the problem when frontend fopen() is redirected to pgwin32_fopen(), which has only been true since commit 0ba06e0bf. Hence, no point in back-patching further, unless we care to back-patch that change too.
  Diagnosis and patch by Alexander Lakhin (bug #16154).
  Discussion: https://postgr.es/m/16154-1ccf0b537b24d5e0@postgresql.org
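
  The rough shape of the workaround (a simplified sketch, not the committed pg_ctl code; log_file stands for the configured log path):

      /* Touch the log file with our own fopen(), which is redirected to
       * pgwin32_fopen() on Windows and waits out ERROR_SHARING_VIOLATION,
       * before CMD.EXE is asked to redirect into it. */
      FILE *fd = fopen(log_file, "a");

      if (fd != NULL)
          fclose(fd);
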
* Fix handling of multiple AFTER ROW triggers on a foreign table. (Etsuro Fujita, 2019-12-10)
  AfterTriggerExecute() retrieves a fresh tuple or pair of tuples from a tuplestore and then stores the tuple(s) in the passed-in slot(s) if AFTER_TRIGGER_FDW_FETCH, while it uses the most-recently-retrieved tuple(s) stored in the slot(s) if AFTER_TRIGGER_FDW_REUSE. This was done correctly before 12, but commit ff11e7f4b broke it by mistakenly clearing the tuple(s) stored in the slot(s) in that function, leading to an assertion failure as reported in bug #16139 from Alexander Lakhin.
  Also, fix some other issues with the aforementioned commit in passing:
  * For tg_newslot, which is a slot added to the TriggerData struct by the commit to store new updated tuples, it didn't ensure the slot was NULL if there was no such tuple.
  * The commit failed to update the documentation about the trigger interface.
  Author: Etsuro Fujita
  Backpatch-through: 12
  Discussion: https://postgr.es/m/16139-94f9ccf0db6119ec%40postgresql.org
* Fix race condition in our Windows signal emulation. (Tom Lane, 2019-12-09)
  pg_signal_dispatch_thread() responded to the client (signal sender) and disconnected the pipe before actually setting the shared variables that make the signal visible to the backend process's main thread. In the worst case, it seems, effective delivery of the signal could be postponed for as long as the machine has any other work to do.
  To fix, just move the pg_queue_signal() call so that we do it before responding to the client. This essentially makes pgkill() synchronous, which is a stronger guarantee than we have on Unix. That may be overkill, but on the other hand we have not seen comparable timing bugs on any Unix platform.
  While at it, add some comments to this sadly underdocumented code.
  Problem diagnosis and fix by Amit Kapila; I just added the comments. Back-patch to all supported versions, as it appears that this can cause visible NOTIFY timing oddities on all of them, and there might be other misbehavior due to slow delivery of other signals.
  Discussion: https://postgr.es/m/32745.1575303812@sss.pgh.pa.us
* Improve isolationtester's timeout management. (Tom Lane, 2019-12-09)
  isolationtester.c had a hard-wired limit of 3 minutes per test step. It now emerges that this isn't quite enough for some of the slowest buildfarm animals. This isn't the first time we've had to raise this limit (cf. 1db439ad4), so let's make it configurable. This patch raises the default to 5 minutes, and introduces an environment variable PGISOLATIONTIMEOUT that can be set if more time is needed, following the precedent of PGCTLTIMEOUT.
  Also, modify isolationtester so that when the timeout is hit, it explicitly reports having sent a cancel. This makes the regression failure log considerably more intelligible. (In the worst case, a timed-out test might actually be reported as "passing" without this extra output, so arguably this is a bug fix in itself.)
  In passing, update the README file, which had apparently not gotten touched when we added "make check" support here.
  Back-patch to 9.6; older versions don't have comparable timeout logic.
  Discussion: https://postgr.es/m/22964.1575842935@sss.pgh.pa.us
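
  For example, a slow machine could be given ten minutes per step like so (600 is just an illustrative value; the variable name follows the commit message above):

      PGISOLATIONTIMEOUT=600 make -C src/test/isolation check
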
* Fix typos in miscinit.c. (Amit Kapila, 2019-12-09)
  Commit f13ea95f9e moved the description of postmaster.pid file contents from miscadmin.h to pidfile.h, but missed updating the comments in miscinit.c.
  Author: Hadi Moshayedi
  Reviewed-by: Amit Kapila
  Backpatch-through: 10
  Discussion: https://postgr.es/m/CAK=1=WpYEM9x3LGkaxgXaxeYQjnkdW8XLsxrYRTE2Gq-H83FMw@mail.gmail.com
* Remove PQsslpassword function (Andrew Dunstan, 2019-12-07)
  This partially reverts commit 4dc6355210. The information returned by the function can be obtained by calling PQconninfo(), so the function is redundant.
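
  A hedged sketch of retrieving the setting via PQconninfo() instead (assumes an established connection whose connection string used the sslpassword option):

      #include <stdlib.h>
      #include <string.h>
      #include <libpq-fe.h>

      /* Returns a malloc'd copy of the sslpassword setting, or NULL. */
      static char *
      get_sslpassword(PGconn *conn)
      {
          PQconninfoOption *opts = PQconninfo(conn);
          char       *result = NULL;

          if (opts == NULL)
              return NULL;
          for (PQconninfoOption *o = opts; o->keyword != NULL; o++)
          {
              if (strcmp(o->keyword, "sslpassword") == 0 && o->val != NULL)
                  result = strdup(o->val);
          }
          PQconninfoFree(opts);
          return result;
      }
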
* Improve test coverage of ruleutils.c. (Tom Lane, 2019-12-06)
  While fooling around with the EXPLAIN improvements I've been working on, I noticed that there were some large gaps in our test coverage of ruleutils.c, according to the code coverage report. This commit just adds a few test cases to improve coverage of:
  get_name_for_var_field()
  get_update_query_targetlist_def()
  isSimpleNode()
  get_sublink_expr()
* Fix comments in execGrouping.c (Jeff Davis, 2019-12-06)
  Commit 5dfc1981 missed updating some comments. Also, fix a comment typo found in passing.
  Author: Jeff Davis
  Discussion: https://postgr.es/m/9723131d247b919f94699152647fa87ee0bc02c2.camel%40j-davis.com
* Disallow non-default collation in ADD PRIMARY KEY/UNIQUE USING INDEX. (Tom Lane, 2019-12-06)
  When creating a uniqueness constraint using a pre-existing index, we have always required that the index have the same properties you'd get if you just let a new index get built. However, when collations were added, we forgot to add the index's collation to that check.
  It's hard to trip over this without intentionally trying to break it: you'd have to explicitly specify a different collation in CREATE INDEX, then convert it to a pkey or unique constraint. Still, if you did that, pg_dump would emit a script that fails to reproduce the index's collation. The main practical problem is that after a pg_upgrade the index would be corrupt, because its actual physical order wouldn't match what pg_index says. A more theoretical issue, which is new as of v12, is that if you create the index with a nondeterministic collation then it wouldn't be enforcing the normal notion of uniqueness, causing the constraint to mean something different from a normally-created constraint.
  To fix, just add collation to the conditions checked for index acceptability in ADD PRIMARY KEY/UNIQUE USING INDEX. We won't try to clean up after anybody who's already created such a situation; it seems improbable enough to not be worth the effort involved. (If you do get into trouble, a REINDEX should be enough to fix it.)
  In principle this is a long-standing bug, but I chose not to back-patch --- the odds of causing trouble seem about as great as the odds of preventing it, and both risks are very low anyway.
  Per report from Alexey Bashtanov, though this is not his preferred fix.
  Discussion: https://postgr.es/m/b05ce36a-cefb-ca5e-b386-a400535b1c0b@imap.cc
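
  An illustrative reproduction of the now-rejected sequence (table and index names invented; assumes the column's default collation is not "C"):

      CREATE TABLE names (n text);
      CREATE UNIQUE INDEX names_c_idx ON names (n COLLATE "C");
      -- With this change, the following errors out because the index's
      -- collation differs from what a freshly built unique index would use:
      ALTER TABLE names ADD CONSTRAINT names_n_key UNIQUE USING INDEX names_c_idx;
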
* Fix handling of OpenSSL's SSL_clear_options (Michael Paquier, 2019-12-06)
  This function is supported down to OpenSSL 0.9.8, which is the oldest version supported since 593d4e4 (from Postgres 10 onwards), and is used since e3bdb2d (from 11 onwards). It is defined as a macro from OpenSSL 0.9.8 to 1.0.2, and as a function in 1.1.0 and newer versions. However, the configure check present is only adapted for functions. So, even if the code would be able to compile, configure fails to detect the macro, causing it to be ignored when compiling the code with OpenSSL from 0.9.8 to 1.0.2.
  The code needs a configure check as per a364dfa, which fixed a compilation issue with a past version of LibreSSL in NetBSD 5.1. On HEAD, just remove the configure check as the last release of NetBSD 5 is from 2014 (and we have no more buildfarm members for it). In 11 and 12, improve the configure logic so that both macros and functions are correctly detected. This makes NetBSD 5 still work on already-released branches, but not for 13 onwards.
  The patch for HEAD is from me, and Daniel has written the version to use for the back-branches.
  Author: Michael Paquier, Daniel Gustafsson
  Reviewed-by: Tom Lane
  Discussion: https://postgr.es/m/20191205083252.GE5064@paquier.xyz
  Discussion: https://postgr.es/m/98F7F99E-1129-41D8-B86B-FE3B1E286881@yesql.se
  Backpatch-through: 11
* Improve some comments in pg_upgrade.c (Michael Paquier, 2019-12-06)
  When restoring database schemas on a new cluster, database "template1" is processed first, followed by all other databases in parallel, including "postgres". Both "postgres" and "template1" have some extra handling to propagate each one's properties, but comments were confusing regarding which one is processed where.
  Author: Julien Rouhaud
  Reviewed-by: Daniel Gustafsson
  Discussion: https://postgr.es/m/CAOBaU_a2iviTG7FE10yO_gcW+zQCHNFhRA_NDiktf3UR65BHdw@mail.gmail.com
* Remove configure check for OpenSSL's SSL_get_current_compression() (Michael Paquier, 2019-12-06)
  This function was added in OpenSSL 0.9.8, which is the oldest version supported on HEAD, so checking for it at configure time is useless. Neither the frontend nor the backend code even bothered to use it.
  Reported-by: Daniel Gustafsson
  Author: Michael Paquier
  Reviewed-by: Daniel Gustafsson, Tom Lane
  Discussion: https://postgr.es/m/20191205083252.GE5064@paquier.xyz
  Discussion: https://postgr.es/m/98F7F99E-1129-41D8-B86B-FE3B1E286881@yesql.se
* pg_basebackup: Refactor code for reading COPY and tar data. (Robert Haas, 2019-12-05)
  Add a new function ReceiveCopyData that does just that, taking a callback as an argument to specify what should be done with each chunk as it is received. This allows a single copy of the logic to be shared between ReceiveTarFile and ReceiveAndUnpackTarFile, and eliminates a few #ifdef conditions based on HAVE_LIBZ. While this is slightly more code, it's arguably clearer, and there is a pending patch that introduces additional calls to ReceiveCopyData.
  This commit is not intended to result in any functional change.
  Discussion: http://postgr.es/m/CA+TgmoYZDTHbSpwZtW=JDgAhwVAYvmdSrRUjOd+AYdfNNXVBDg@mail.gmail.com
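
  The rough shape of that callback-based helper, as a sketch only (the typedef name and signatures here may differ in detail from what pg_basebackup.c actually uses):

      typedef void (*WriteDataCallback) (size_t nbytes, char *buf,
                                         void *callback_data);

      static void
      ReceiveCopyData(PGconn *conn, WriteDataCallback callback,
                      void *callback_data)
      {
          char       *copybuf;
          int         r;

          /* Hand each COPY OUT chunk to the caller-supplied sink. */
          while ((r = PQgetCopyData(conn, &copybuf, 0)) > 0)
          {
              callback((size_t) r, copybuf, callback_data);
              PQfreemem(copybuf);
          }
          /* r == -1: end of the copy stream; r == -2: an error occurred. */
      }
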
* Minor comment improvements for instrumentation.h (Robert Haas, 2019-12-05)
  Remove a duplicated word. Add "of" or "# of" in a couple places for clarity and consistency. Start comments with a lower case letter as we do elsewhere in this file.
  Rafia Sabih
* Blind attempt at fixing ecpg/compatlib's build (Alvaro Herrera, 2019-12-04)
  It now needs libpgcommon in order to get pnstrdup. Per buildfarm.
* Offer pnstrdup to frontend code (Alvaro Herrera, 2019-12-04)
  We already had it in the backend. Frontend code can also use it now.
  Discussion: https://postgr.es/m/20191204144021.GA17976@alvherre.pgsql
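
  A minimal frontend usage sketch (assuming pnstrdup() is declared alongside the other frontend memory helpers in common/fe_memutils.h):

      #include "postgres_fe.h"
      #include <stdio.h>
      #include "common/fe_memutils.h"

      int
      main(void)
      {
          /* copy at most 10 bytes and NUL-terminate the result */
          char   *prefix = pnstrdup("postgresql://localhost/db", 10);

          printf("%s\n", prefix);     /* prints "postgresql" */
          pg_free(prefix);
          return 0;
      }
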
* Update minimum SSL version (Peter Eisentraut, 2019-12-04)
  Change default of ssl_min_protocol_version to TLSv1.2 (from TLSv1, which means 1.0). Older versions are still supported, just not by default.
  TLS 1.0 is widely deprecated, and TLS 1.1 only slightly less so. All OpenSSL versions that support TLS 1.1 also support TLS 1.2, so there would be very little reason to, say, set the default to TLS 1.1 instead on grounds of better compatibility.
  The test suite overrides this new setting, so it can still run with older OpenSSL versions.
  Discussion: https://www.postgresql.org/message-id/flat/b327f8df-da98-054d-0cc5-b76a857cfed9%402ndquadrant.com
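
  Administrators who still need to accept older clients can lower the setting explicitly, e.g. in postgresql.conf:

      # TLSv1.2 is now the default; relax only if older clients must connect
      ssl_min_protocol_version = 'TLSv1.1'
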
* Fix whitespace. (Etsuro Fujita, 2019-12-04)
* Use carriage returns for data insertion logs in pgbench on terminal (Michael Paquier, 2019-12-04)
  This is similar to what pg_basebackup and pg_rewind do when reporting cumulative data, and that's more user-friendly. Carriage returns are now used when stderr points to a terminal, and newlines are used in other cases, like a redirection to a log file.
  Author: Amit Langote
  Reviewed-by: Fabien Coelho
  Discussion: https://postgr.es/m/CA+HiwqFNwEjPeVaQsp2L7DyCPv1Eg1guwhrVhzMYqUJUk8ULKg@mail.gmail.com
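
  The underlying idea, as a small sketch (variable names and message text are illustrative, not pgbench's actual code): pick '\r' when stderr is a terminal so the progress line overwrites itself, and '\n' otherwise so a log file keeps every line.

      #include <stdio.h>
      #include <unistd.h>

      static void
      report_progress(long long done, long long total)
      {
          char        eol = isatty(fileno(stderr)) ? '\r' : '\n';

          fprintf(stderr, "%lld of %lld tuples done%c", done, total, eol);
      }
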
* Remove unnecessary definition of CancelRequested in bin/scripts/ (Michael Paquier, 2019-12-04)
  This variable is now part of the refactored code for query cancellation in fe_utils. This fixes an oversight in commit a4fd3aa. While at it, improve some header includes in bin/scripts/.
  Author: Michael Paquier
  Reviewed-by: Fabien Coelho
  Discussion: https://postgr.es/m/20191203101625.GF1634@paquier.xyz
* Fix thinkos from commit 9989d37 (Michael Paquier, 2019-12-03)
  Error messages referring to incorrect WAL segment names could have been generated for an fsync() failure or when creating a new segment at the end of recovery.
* Fix alter_system_table test (Peter Eisentraut, 2019-12-03)
  Add workaround for disabling ENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS warning for the test that tries to create a tablespace with a reserved name.
  Discussion: https://www.postgresql.org/message-id/flat/E1iacW7-0003h6-6U%40gemulon.postgresql.org
* Remove XLogFileNameP() from the tree (Michael Paquier, 2019-12-03)
  XLogFileNameP() is a wrapper routine able to build a palloc'd string for a WAL segment name, which is used for error string generation. There were several code paths where it gets called in a critical section, where memory allocation is not allowed. This results in triggering an assertion failure instead of generating the wanted error message.
  Another, more annoying, problem is that if the allocation to generate the WAL segment name fails on OOM, then the failure would be escalated to a PANIC.
  This removes the routine and all its callers are replaced with a logic using a fixed-size buffer. This way, all the existing mistakes are fixed and future ones are prevented.
  Author: Masahiko Sawada
  Reviewed-by: Michael Paquier, Álvaro Herrera
  Discussion: https://postgr.es/m/CA+fd4k5gC9H4uoWMLg9K_QfNrnkkdEw+-AFveob9YX7z8JnKTA@mail.gmail.com
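
  A hedged sketch of the replacement pattern (the error text is illustrative, and tli/segno stand for values available in the surrounding code): format the segment name into a stack buffer with XLogFileName(), which is safe inside a critical section.

      char        xlogfname[MAXFNAMELEN];

      XLogFileName(xlogfname, tli, segno, wal_segment_size);
      ereport(PANIC,
              (errcode_for_file_access(),
               errmsg("could not fsync file \"%s\": %m", xlogfname)));
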
* Fix failures with TAP tests of pg_ctl on Windows (Michael Paquier, 2019-12-03)
  On Windows, all the hosts spawned by the TAP tests bind to 127.0.0.1. Hence, if there is a port conflict, starting a cluster would immediately fail. One of the test scripts of pg_ctl initializes a node without PostgresNode.pm, using the default port 5432. This could cause unexpected startup failures in the tests if an independent server was up and running on the same host (the reverse is also possible, though more unlikely). Fix this issue by properly assigning a free port to the configured node, in the same range used for the other nodes that are part of the tests.
  Author: Michael Paquier
  Reviewed-by: Andrew Dunstan
  Discussion: https://postgr.es/m/20191202031444.GC1696@paquier.xyz
  Backpatch-through: 11