path: root/src
* Use RAND_poll() for seeding randomness after fork().  (Noah Misch, 2020-07-25)

  OpenSSL deprecated RAND_cleanup(), and OpenSSL 1.1.0 made it into a no-op. Replace it with RAND_poll(), per an OpenSSL community recommendation. While this has no user-visible consequences under OpenSSL defaults, it might help under non-default settings.

  Daniel Gustafsson, reviewed by David Steele and Michael Paquier.

  Discussion: https://postgr.es/m/9B038FA5-23E8-40D0-B932-D515E1D8F66A@yesql.se

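  An illustrative sketch of the pattern, not the committed PostgreSQL code: a child process re-seeds OpenSSL's PRNG with RAND_poll() right after fork(), instead of calling the deprecated RAND_cleanup().

      #include <unistd.h>
      #include <openssl/rand.h>

      int
      main(void)
      {
          pid_t pid = fork();

          if (pid == 0)
          {
              /* Re-seed in the child; RAND_poll() returns 1 on success. */
              if (RAND_poll() != 1)
                  _exit(1);       /* could not gather entropy */
              /* The child may now use RAND_bytes() and friends safely. */
          }
          return 0;
      }
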
* Improve performance of binary COPY FROM through better buffering.  (Tom Lane, 2020-07-25)

  At least on Linux and macOS, fread() turns out to have far higher per-call overhead than one could wish. Reading 64KB of data at a time and then parceling it out with our own memcpy logic makes binary COPY from a file significantly faster --- around 30% in simple testing for cases with narrow text columns (on Linux ... even more on macOS).

  In binary COPY from frontend, there's no per-call fread(), and this patch introduces an extra layer of memcpy'ing, but it still manages to eke out a small win. Apparently, the control-logic overhead in CopyGetData() is enough to be worth avoiding for small fetches.

  Bharath Rupireddy and Amit Langote, reviewed by Vignesh C, cosmetic tweaks by me

  Discussion: https://postgr.es/m/CALj2ACU5Bz06HWLwqSzNMN=Gupoj6Rcn_QVC+k070V4em9wu=A@mail.gmail.com

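  A minimal sketch of the buffering idea described above: read 64KB per fread() call and hand out small pieces with memcpy(). The structure and names are invented for illustration and are not taken from copy.c.

      #include <stdio.h>
      #include <string.h>

      #define RAW_BUF_SIZE (64 * 1024)

      typedef struct
      {
          FILE   *fp;
          char    buf[RAW_BUF_SIZE];
          size_t  len;            /* valid bytes in buf */
          size_t  pos;            /* next unread byte */
      } BufferedReader;

      /* Serve nbytes from the buffer, refilling with one big fread() as needed. */
      static size_t
      buffered_read(BufferedReader *r, void *dst, size_t nbytes)
      {
          size_t  copied = 0;

          while (copied < nbytes)
          {
              if (r->pos == r->len)
              {
                  r->len = fread(r->buf, 1, RAW_BUF_SIZE, r->fp);
                  r->pos = 0;
                  if (r->len == 0)
                      break;      /* EOF or error */
              }
              size_t  avail = r->len - r->pos;
              size_t  chunk = (nbytes - copied < avail) ? nbytes - copied : avail;

              memcpy((char *) dst + copied, r->buf + r->pos, chunk);
              r->pos += chunk;
              copied += chunk;
          }
          return copied;
      }
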
* Mark built-in coercion functions as leakproof where possible.  (Tom Lane, 2020-07-25)

  Making these leakproof seems helpful since (for example) if you have a function f(int8) that is leakproof, you don't want it to effectively become non-leakproof when you apply it to an int4 or int2 column. But that's what happens today, since the implicit up-coercion will not be leakproof.

  Most of the coercion functions that visibly can't throw errors are functions that convert numeric datatypes to other, wider ones. Notable is that float4_numeric and float8_numeric can be marked leakproof; before commit a57d312a7 they could not have been.

  I also marked the functions that coerce strings to "name" as leakproof; that's okay today because they truncate silently, but if we ever reconsidered that behavior then they could no longer be leakproof.

  I desisted from marking rtrim1() as leakproof; it appears so right now, but the code seems a little too complex and perhaps subject to change, since it's shared with other SQL functions.

  Discussion: https://postgr.es/m/459322.1595607431@sss.pgh.pa.us

* Fix buffer usage stats for nodes above Gather Merge.  (Amit Kapila, 2020-07-25)

  Commit 85c9d347 addressed a similar problem for Gather and Gather Merge nodes but forgot to account for nodes above parallel nodes. This still works for nodes above a Gather node because we shut down the workers for Gather as soon as there are no more tuples. We could do a similar thing for Gather Merge as well, but it seems better to account for stats during node shutdown after completing the execution.

  Reported-by: Stéphane Lorek, Jehan-Guillaume de Rorthais
  Author: Jehan-Guillaume de Rorthais <jgdr@dalibo.com>
  Reviewed-by: Amit Kapila
  Backpatch-through: 10, where it was introduced
  Discussion: https://postgr.es/m/20200718160206.584532a2@firost

* Replace TS_execute's TS_EXEC_CALC_NOT flag with TS_EXEC_SKIP_NOT.  (Tom Lane, 2020-07-24)

  It's fairly silly that ignoring NOT subexpressions is TS_execute's default behavior. It's wrong on its face and it encourages errors of omission. Moreover, the only two remaining callers that aren't specifying CALC_NOT are in ts_headline calculations, and it's very arguable that those are bugs: if you've specified "!foo" in your query, why would you want to get a headline that includes "foo"?

  Hence, rip that out and change the default behavior to be to calculate NOT accurately. As a concession to the slim chance that there is still somebody somewhere who needs the incorrect behavior, provide a new SKIP_NOT flag to explicitly request that.

  Back-patch into v13, mainly because it seems better to change this at the same time as the previous commit's rejiggering of TS_execute related APIs. Any outside callers affected by this change are probably also affected by that one.

  Discussion: https://postgr.es/m/CALT9ZEE-aLotzBg-pOp2GFTesGWVYzXA3=mZKzRDa_OKnLF7Mg@mail.gmail.com

* Fix assorted bugs by changing TS_execute's callback API to ternary logic.  (Tom Lane, 2020-07-24)

  Text search sometimes failed to find valid matches, for instance '!crew:A'::tsquery might fail to locate 'crew:1B'::tsvector during an index search. The root of the issue is that TS_execute's callback functions were not changed to use ternary (yes/no/maybe) reporting when we made the search logic itself do so. It's somewhat annoying to break that API, but on the other hand we now see that any code using plain boolean logic is almost certainly broken since the addition of phrase search. There seem to be very few outside callers of this code anyway, so we'll just break them intentionally to get them to adapt.

  This allows removal of tsginidx.c's private re-implementation of TS_execute, since that's now entirely duplicative. It's also no longer necessary to avoid use of CALC_NOT in tsgistidx.c, since the underlying callbacks can now do something reasonable.

  Back-patch into v13. We can't change this in stable branches, but it seems not quite too late to fix it in v13.

  Tom Lane and Pavel Borisov

  Discussion: https://postgr.es/m/CALT9ZEE-aLotzBg-pOp2GFTesGWVYzXA3=mZKzRDa_OKnLF7Mg@mail.gmail.com

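  A toy illustration of why boolean reporting is lossy for NOT, in the spirit of the yes/no/maybe callbacks described above; the type and names are invented and this is not the text-search code itself.

      typedef enum
      {
          TERN_NO,
          TERN_YES,
          TERN_MAYBE          /* e.g. "no position data, cannot tell" */
      } Ternary;

      /* NOT over ternary values: "maybe" must stay "maybe".  Collapsing it
       * to boolean false and then negating would wrongly turn it into true. */
      static Ternary
      ternary_not(Ternary v)
      {
          if (v == TERN_YES)
              return TERN_NO;
          if (v == TERN_NO)
              return TERN_YES;
          return TERN_MAYBE;
      }

      int
      main(void)
      {
          return ternary_not(TERN_MAYBE) == TERN_MAYBE ? 0 : 1;
      }
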
* Rename configure.in to configure.ac  (Peter Eisentraut, 2020-07-24)

  The new name has been preferred by Autoconf for a long time. Future versions of Autoconf will warn about the old name.

  Discussion: https://www.postgresql.org/message-id/flat/e796c185-5ece-8569-248f-dd3799701be1%402ndquadrant.com

* Fix error message.  (Thomas Munro, 2020-07-23)

  Remove extra space. Back-patch to all releases, like commit 7897e3bb.

  Author: Lu, Chenyang <lucy.fnst@cn.fujitsu.com>
  Discussion: https://postgr.es/m/795d03c6129844d3803e7eea48f5af0d%40G08CNEXMBPEKD04.g08.fujitsu.local

* WAL Log invalidations at command end with wal_level=logical.  (Amit Kapila, 2020-07-23)

  When wal_level=logical, write invalidations at command end into WAL so that decoding can use this information. This patch is required to allow the streaming of in-progress transactions in logical decoding. The actual work to allow streaming will be committed as a separate patch.

  We still add the invalidations to the cache and write them to WAL at commit time in RecordTransactionCommit(). This uses the existing XLOG_INVALIDATIONS xlog record type, from the RM_STANDBY_ID resource manager (see LogStandbyInvalidations for details). So existing code relying on those invalidations (e.g. redo) does not need to be changed.

  The invalidations written at command end use a new xlog record type XLOG_XACT_INVALIDATIONS, from the RM_XACT_ID resource manager. See LogLogicalInvalidations for details. These new xlog records are ignored by existing redo procedures, which still rely on the invalidations written to commit records.

  The invalidations are decoded and accumulated in the top-transaction, and then executed during replay. This obviates the need to decode the invalidations as part of a commit record.

  Bump XLOG_PAGE_MAGIC, since this introduces XLOG_XACT_INVALIDATIONS.

  Author: Dilip Kumar, Tomas Vondra, Amit Kapila
  Reviewed-by: Amit Kapila
  Tested-by: Neha Sharma and Mahendra Singh Thalor
  Discussion: https://postgr.es/m/688b0b7f-2f6c-d827-c27b-216a8e3ea700@2ndquadrant.com

* Support infinity and -infinity in the numeric data type.  (Tom Lane, 2020-07-22)

  Add infinities that behave the same as they do in the floating-point data types. Aside from any intrinsic usefulness these may have, this closes an important gap in our ability to convert floating values to numeric and/or replace float-based APIs with numeric.

  The new values are represented by bit patterns that were formerly not used (although old code probably would take them for NaNs). So there shouldn't be any pg_upgrade hazard.

  Patch by me, reviewed by Dean Rasheed and Andrew Gierth

  Discussion: https://postgr.es/m/606717.1591924582@sss.pgh.pa.us

* Fix conversion table generator scripts.  (Thomas Munro, 2020-07-22)

  convutils.pm relied on implicit conversion of an undefined value to integer zero, and some of the conversion scripts are susceptible to regexp greediness. Fix these, avoiding whitespace changes in the output. Also update ICU URLs that moved.

  No need to back-patch, because the output of these scripts is also in the source tree, so we shouldn't need to rerun them on back-branches.

  Author: Kyotaro Horiguchi <horikyoga.ntt@gmail.com>
  Discussion: https://postgr.es/m/CA%2BhUKGJ7SEGLbj%3D%3DTQCcyKRA9aqj8%2B6L%3DexSq1y25TA%3DWxLziQ%40mail.gmail.com

* Fix comment in sha2.h  (Michael Paquier, 2020-07-22)

  An incorrect reference to SHA-1 was present.

  Author: Daniel Gustafsson
  Discussion: https://postgr.es/m/FE26C953-FA87-4BB9-9105-AA1F8705B0D0@yesql.se

* neqjoinsel must now pass through collation to eqjoinsel.  (Tom Lane, 2020-07-21)

  Since commit 044c99bc5, eqjoinsel passes the passed-in collation to any operators it invokes. However, neqjoinsel failed to pass on whatever collation it got, so that if we invoked a collation-dependent operator via that code path, we'd get "could not determine which collation to use for string comparison" or the like.

  Per report from Justin Pryzby. Back-patch to v12, like the previous commit.

  Discussion: https://postgr.es/m/20200721191606.GL5748@telsasoft.com

* Add nbtree Valgrind buffer lock checks.  (Peter Geoghegan, 2020-07-21)

  Holding just a buffer pin (with no buffer lock) on an nbtree buffer/page provides very weak guarantees, especially compared to heapam, where it's often safe to read a page while only holding a buffer pin. This commit has Valgrind enforce the following rule: it is never okay to access an nbtree buffer without holding both a pin and a lock on the buffer.

  A draft version of this patch detected questionable code that was cleaned up by commits fa7ff642 and 7154aa16. The code in question used to access an nbtree buffer page's special/opaque area with no buffer lock (only a buffer pin). This practice (which isn't obviously unsafe) is hereby formally disallowed in nbtree. There doesn't seem to be any reason to allow it, and banning it keeps things simple for Valgrind.

  The new checks are implemented by adding custom nbtree client requests (located in LockBuffer() wrapper functions); these requests are "superimposed" on top of the generic bufmgr.c Valgrind client requests added by commit 1e0dfd16. No custom resource management cleanup code is needed to undo the effects of marking buffers as non-accessible under this scheme.

  Author: Peter Geoghegan
  Reviewed-By: Anastasia Lubennikova, Georgios Kokolatos
  Discussion: https://postgr.es/m/CAH2-WzkLgyN3zBvRZ1pkNJThC=xi_0gpWRUb_45eexLH1+k2_Q@mail.gmail.com

* Weaken type-OID-matching checks in array_recv and record_recv.  (Tom Lane, 2020-07-21)

  Rather than always insisting on an exact match of the type OID in the data to the element type or column type we expect, complain only when both OIDs fall within the manually-assigned range. This acknowledges the reality that user-defined types don't have stable OIDs, while still preserving some of the mistake-detection value of the old test.

  (It's not entirely clear whether to error if one OID is manually assigned and the other isn't. But perhaps that case could arise in cross-version cases where a former extension type has been imported into core, so I let it pass.)

  This change allows us to remove the prohibition on binary transfer of user-defined arrays and composites in the recently-landed support for binary logical replication (commit 9de77b545). We can just unconditionally drop that check, since if the client has asked for binary transfer it must be >= v14 and must have this change.

  Discussion: https://postgr.es/m/CADK3HH+R3xMn=8t3Ct+uD+qJ1KD=Hbif5NFMJ+d5DkoCzp6Vgw@mail.gmail.com

* Be more careful about marking catalog columns NOT NULL by default.  (Tom Lane, 2020-07-21)

  The bug fixed in commit 72eab84a5 would not have occurred if initdb had a less surprising rule about which columns should be marked NOT NULL by default. Let's make that rule be strictly that the column must be fixed-width and its predecessors must be fixed-width and NOT NULL, removing the hacky and unsafe exceptions for oidvector and int2vector.

  Since we do still want all existing oidvector and int2vector columns to be marked NOT NULL, we have to put BKI_FORCE_NOT_NULL labels on them. But making this less magic and more documented seems like a good idea, even if it's a shade more verbose.

  I didn't bump catversion since the initial catalog contents are not actually changed by this patch. Note however that the contents of postgres.bki do change, and feeding an old copy of that to a new backend will produce wrong results.

  Discussion: https://postgr.es/m/204760.1595181800@sss.pgh.pa.us

* Assert that we don't insert nulls into attnotnull catalog columns.  (Tom Lane, 2020-07-21)

  The executor checks for this error, and so does the bootstrap catalog loader, but we never checked for it in retail catalog manipulations. The folly of that has now been exposed, so let's add assertions checking it. Checking in CatalogTupleInsert[WithInfo] and CatalogTupleUpdate[WithInfo] should be enough to cover this.

  Back-patch to v10; the aforesaid functions didn't exist before that, and it didn't seem worth adapting the patch to the oldest branches. But given the risk of JIT crashes, I think we certainly need this as far back as v11.

  Pre-v13, we have to explicitly exclude pg_subscription.subslotname and pg_subscription_rel.srsublsn from the checks, since they are mismarked. (Even if we change our mind about applying BKI_FORCE_NULL in the branch tips, it doesn't seem wise to have assertions that would fire in existing databases.)

  Discussion: https://postgr.es/m/298837.1595196283@sss.pgh.pa.us

* Rework tab completion of COPY and \copy in psql  (Michael Paquier, 2020-07-21)

  This corrects and simplifies $subject in a number of ways:
  - Remove from the completion the pre-9.0 grammar still supported for compatibility purposes. This simplifies the code and makes it easier to extend with new patterns.
  - Add completion for the options of FORMAT within a WITH clause.
  - Complete WHERE and WITH clauses correctly depending on whether TO or FROM is used, WHERE being only available with COPY FROM.

  Author: Vignesh C, Michael Paquier
  Reviewed-by: Ahsan Hadi
  Discussion: https://postgr.es/m/CALDaNm3zWr=OmxeNqOqfT=uZTSdam_j-gkX94CL8eTNfgUtf6A@mail.gmail.com

* Fix some corner cases for window ranges with infinite offsets.  (Tom Lane, 2020-07-20)

  Many situations where the offset is infinity were not handled sanely. We should generally allow the val versus base +/- offset comparison to proceed according to the normal rules of IEEE arithmetic; however, we must do something special for the corner cases where base +/- offset would produce NaN due to subtracting two like-signed infinities. That corresponds to asking which values infinitely precede +inf or infinitely follow -inf, which should certainly be true of any finite value or of the opposite-signed infinity. After some discussion it seems that the best decision is to make it true of the same-signed infinity as well, ie, just return constant TRUE if the calculation would produce a NaN.

  (We could write this with a bit less code by subtracting anyway, and then checking for a NaN result. However, I prefer this formulation because it'll be easier to transpose into numeric.c.)

  Although this seems like clearly a bug fix with respect to finite values, it is less obviously correct for infinite values. Between that and the fact that the whole issue only arises for very strange window specifications (e.g. RANGE BETWEEN 'inf' PRECEDING AND 'inf' PRECEDING), I'll desist from back-patching.

  Noted by Dean Rasheed.

  Discussion: https://postgr.es/m/3393130.1594925893@sss.pgh.pa.us

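  The arithmetic corner case can be shown in a few lines of C; this only illustrates the stated rule (treat a would-be NaN bound as constant TRUE), and is not the window-function code itself.

      #include <math.h>
      #include <stdbool.h>
      #include <stdio.h>

      /* Does "val" satisfy "val <= base + offset"?  When base and offset are
       * opposite-signed infinities the sum is NaN, and the rule described
       * above is to return constant true in that case. */
      static bool
      precedes_bound(double val, double base, double offset)
      {
          double  bound = base + offset;

          if (isnan(bound) && !isnan(base) && !isnan(offset))
              return true;        /* inf - inf case: everything qualifies */
          return val <= bound;    /* ordinary IEEE comparison */
      }

      int
      main(void)
      {
          printf("%d\n", (int) precedes_bound(42.0, -INFINITY, INFINITY)); /* 1 */
          return 0;
      }
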
* Make floating-point "NaN / 0" return NaN instead of raising an error.  (Tom Lane, 2020-07-20)

  This is more consistent with the IEEE 754 spec and our treatment of NaNs elsewhere; in particular, the case has always acted that way in "numeric" arithmetic.

  Noted by Dean Rasheed.

  Discussion: https://postgr.es/m/3421746.1594927785@sss.pgh.pa.us

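  For reference, a couple of lines of plain C show the IEEE 754 behavior this aligns with; this is ordinary float arithmetic, not PostgreSQL's division code.

      #include <math.h>
      #include <stdio.h>

      int
      main(void)
      {
          double  zero = 0.0;
          double  nan_val = NAN;

          /* Under IEEE 754, NaN divided by zero is simply NaN; the commit makes
           * float division return this instead of raising a division-by-zero
           * error when the numerator is already NaN. */
          printf("NaN / 0 = %g\n", nan_val / zero);   /* prints "nan" */
          return 0;
      }
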
* Assert that buffer is pinned in LockBuffer().  (Peter Geoghegan, 2020-07-20)

  Strengthen the LockBuffer() assertion that verifies BufferIsValid() by making it verify BufferIsPinned() instead. Do the same in nearby related functions.

  There is probably not much chance that anybody will try to lock a buffer that is not already pinned, but we might as well make sure of that.

* Correctly mark pg_subscription_rel.srsublsn as nullable.  (Tom Lane, 2020-07-20)

  The code has always set this column to NULL when it's not valid, but the catalog header's description failed to reflect that, as did the SGML docs, as did some of the code. To prevent future coding errors of the same ilk, let's hide the field from C code as though it were variable-length (which, in a sense, it is).

  As with commit 72eab84a5, we can only fix this cleanly in HEAD and v13; the problem extends further back but we'll need some klugery in the released branches.

  Discussion: https://postgr.es/m/367660.1595202498@sss.pgh.pa.us

* Fix construction of updated-columns bitmap in logical replication.  (Tom Lane, 2020-07-20)

  Commit b9c130a1f failed to apply the publisher-to-subscriber column mapping while checking which columns were updated. Perhaps less significantly, it didn't exclude dropped columns either. This could result in an incorrect updated-columns bitmap and thus wrong decisions about whether to fire column-specific triggers on the subscriber while applying updates. In HEAD (since commit 9de77b545), it could also result in accesses off the end of the colstatus array, as detected by buildfarm member skink.

  Fix the logic, and adjust 003_constraints.pl so that the problem is exposed in unpatched code.

  In HEAD, also add some assertions to check that we don't access off the ends of these newly variable-sized arrays.

  Back-patch to v10, as b9c130a1f was.

  Discussion: https://postgr.es/m/CAH2-Wz=79hKQ4++c5A060RYbjTHgiYTHz=fw6mptCtgghH2gJA@mail.gmail.com

* Rename wal_keep_segments to wal_keep_size.  (Fujii Masao, 2020-07-20)

  max_slot_wal_keep_size, which was added in v13, and wal_keep_segments are the GUC parameters that specify how much WAL to retain for standby servers. While max_slot_wal_keep_size accepts a number of bytes of WAL, wal_keep_segments accepts a number of WAL files. This difference in setting units between such similar parameters could be confusing to users.

  To alleviate this situation, this commit renames wal_keep_segments to wal_keep_size, and makes users specify the WAL size in it instead of the number of WAL files.

  There was also the idea in the discussion to rename max_slot_wal_keep_size to max_slot_wal_keep_segments. But we have been moving away from measuring in segments; for example, checkpoint_segments was replaced by max_wal_size. So we concluded to rename wal_keep_segments to wal_keep_size.

  Back-patch to v13 where max_slot_wal_keep_size was added.

  Author: Fujii Masao
  Reviewed-by: Álvaro Herrera, Kyotaro Horiguchi, David Steele
  Discussion: https://postgr.es/m/574b4ea3-e0f9-b175-ead2-ebea7faea855@oss.nttdata.com

* Immediately WAL-log subtransaction and top-level XID association.  (Amit Kapila, 2020-07-20)

  The logical decoding infrastructure needs to know which top-level transaction a subxact belongs to, in order to decode all the changes. Until now that might be delayed until commit, due to the caching (GPROC_MAX_CACHED_SUBXIDS), preventing features that require incremental decoding. So we also write the assignment info into WAL immediately, as part of the next WAL record (to minimize overhead), and only when wal_level=logical.

  We cannot remove the existing XLOG_XACT_ASSIGNMENT WAL record, as it is required for avoiding overflow in the hot standby snapshot.

  Bump XLOG_PAGE_MAGIC, since this introduces XLR_BLOCK_ID_TOPLEVEL_XID.

  Author: Tomas Vondra, Dilip Kumar, Amit Kapila
  Reviewed-by: Amit Kapila
  Tested-by: Neha Sharma and Mahendra Singh Thalor
  Discussion: https://postgr.es/m/688b0b7f-2f6c-d827-c27b-216a8e3ea700@2ndquadrant.com

* Add generic_plans and custom_plans fields into pg_prepared_statements.  (Fujii Masao, 2020-07-20)

  There was no easy way to find how many times generic and custom plans have been executed for a prepared statement. This commit exposes those counts in the pg_prepared_statements view.

  Author: Atsushi Torikoshi, Kyotaro Horiguchi
  Reviewed-by: Tatsuro Yamada, Masahiro Ikeda, Fujii Masao
  Discussion: https://postgr.es/m/CACZ0uYHZ4M=NZpofH6JuPHeX=__5xcDELF8hT8_2T+R55w4RQw@mail.gmail.com

* Fix minor typo in nodeIncrementalSort.c.  (Amit Kapila, 2020-07-20)

  Author: Vignesh C
  Reviewed-by: James Coleman
  Backpatch-through: 13, where it was introduced
  Discussion: https://postgr.es/m/CALDaNm0WjZqRvdeL59ZfYH0o4mLbKQ23jm-bnjXcFzgpANx55g@mail.gmail.com

* Avoid harmless Valgrind no-buffer-pin errors.  (Peter Geoghegan, 2020-07-19)

  Valgrind builds with assertions enabled sometimes perform a theoretically unsafe page access inside an assertion in heapam_tuple_lock(). This happened when the eval-plan-qual isolation test ran one of the permutations added by commit a2418f9e238.

  Avoid complaints from Valgrind by moving the assertion ever so slightly. This is minor cleanup for commit 1e0dfd16, which added Valgrind buffer access instrumentation.

  No backpatch, since this only happens within an assertion, and seems very unlikely to cause any real problems even with assert-enabled builds.

* Mark buffers as defined to Valgrind consistently.  (Peter Geoghegan, 2020-07-19)

  Make PinBuffer() mark buffers as defined to Valgrind unconditionally, including when the buffer header spinlock must be acquired. Failure to handle that case could lead to false positive reports from Valgrind.

  This theoretically creates a risk that we'll mark buffers defined even when external callers don't end up with a buffer pin. That seems perfectly acceptable, though, since in general we make no guarantees about buffers that are unsafe to access being reliably marked as unsafe.

  Oversight in commit 1e0dfd16, which added Valgrind buffer access instrumentation.

* Correctly mark pg_subscription.subslotname as nullable.  (Tom Lane, 2020-07-19)

  Due to the layout of this catalog, subslotname has to be explicitly marked BKI_FORCE_NULL, else initdb will default to the assumption that it's non-nullable. Since, in fact, CREATE/ALTER SUBSCRIPTION will store null values there, the existing marking is just wrong, and has been since this catalog was invented.

  We haven't noticed because not much in the system actually depends on attnotnull being truthful. However, JIT'ed tuple deconstruction does depend on that in some cases, allowing crashes or wrong answers in queries that inspect pg_subscription. Commit 9de77b545 quite accidentally exposed this on the buildfarm members that force JIT activation.

  Back-patch to v13. The problem goes further back, but we cannot force initdb in released branches, so some klugier solution will be needed there. Before working on that, push this simple fix to try to get the buildfarm back to green.

  Discussion: https://postgr.es/m/4118109.1595096139@sss.pgh.pa.us

* Define OPENSSL_API_COMPAT  (Peter Eisentraut, 2020-07-19)

  This avoids deprecation warnings from newer OpenSSL versions (3.0.0 in particular).

  Discussion: https://www.postgresql.org/message-id/flat/FEF81714-D479-4512-839B-C769D2605F8A%40yesql.se

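  A hedged example of the mechanism: defining OPENSSL_API_COMPAT before including OpenSSL headers declares which API level the code targets, so deprecation warnings are only emitted for functions removed at that level. The value below is an assumption for illustration; the level the commit actually chose is not visible in this log entry.

      /* Assumed compatibility level (OpenSSL 1.0.2 style), illustration only. */
      #define OPENSSL_API_COMPAT 0x10002000L

      #include <openssl/ssl.h>

      int
      main(void)
      {
          /* With the macro set, calls to APIs still provided at the chosen
           * level do not trigger deprecation warnings under OpenSSL 3.0.0. */
          return 0;
      }
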
* Fix replication/worker_internal.h to compile without other headers.  (Tom Lane, 2020-07-18)

  This header hasn't changed recently, so the fact that it now fails headerscheck/cpluspluscheck testing must be due to changes in what it includes. Probably f21916791 is to blame, but I didn't try to verify that.

  Discussion: https://postgr.es/m/3699703.1595016554@sss.pgh.pa.us

* Allow logical replication to transfer data in binary format.  (Tom Lane, 2020-07-18)

  This patch adds a "binary" option to CREATE/ALTER SUBSCRIPTION. When that's set, the publisher will send data using the data type's typsend function if any, rather than typoutput. This is generally faster, if slightly less robust.

  As committed, we won't try to transfer user-defined array or composite types in binary, for fear that type OIDs won't match at the subscriber. This might be changed later, but it seems like fit material for a follow-on patch.

  Dave Cramer, reviewed by Daniel Gustafsson, Petr Jelinek, and others; adjusted some by me

  Discussion: https://postgr.es/m/CADK3HH+R3xMn=8t3Ct+uD+qJ1KD=Hbif5NFMJ+d5DkoCzp6Vgw@mail.gmail.com

* Adjust minor comment in reorderbuffer.c.  (Amit Kapila, 2020-07-18)

  Author: Dave Cramer
  Reviewed-by: David G. Johnston
  Discussion: https://postgr.es/m/CADK3HHL8do4Fp1bsymgNasx375njV3AR7zY3UgYwzbL_Dx-n2Q@mail.gmail.com

* Fix comments in reorderbuffer.c.  (Amit Kapila, 2020-07-18)

  Author: Dave Cramer
  Reviewed-by: David G. Johnston
  Discussion: https://postgr.es/m/CADK3HHL8do4Fp1bsymgNasx375njV3AR7zY3UgYwzbL_Dx-n2Q@mail.gmail.com

* Rename "hash_mem" local variable.  (Peter Geoghegan, 2020-07-17)

  The term "hash_mem" will take on new significance when pending work to add a new hash_mem_multiplier GUC is committed. Rename a local variable that happens to have been called hash_mem now to avoid confusion.

* Add Valgrind buffer access instrumentation.  (Peter Geoghegan, 2020-07-17)

  Teach Valgrind memcheck to maintain the "defined-ness" of each shared buffer based on whether the backend holds at least one pin at the point it is accessed by access method code. Bugs like the one fixed by commit b0229f26 can be detected using this new instrumentation.

  Note that backends running with Valgrind naturally have their own independent ideas about whether any given byte in shared memory is safe or unsafe to access. There is no risk that concurrent access by multiple backends to the same shared memory will confuse Valgrind's instrumentation, because everything already works at the process level (or at the memory mapping level, if you prefer).

  Author: Álvaro Herrera, Peter Geoghegan
  Reviewed-By: Anastasia Lubennikova
  Discussion: https://postgr.es/m/20150723195349.GW5596@postgresql.org
  Discussion: https://postgr.es/m/CAH2-WzkLgyN3zBvRZ1pkNJThC=xi_0gpWRUb_45eexLH1+k2_Q@mail.gmail.com

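  The underlying Valgrind memcheck client requests are ordinary macros from <valgrind/memcheck.h>; a toy example of flipping a region between "no access" (unpinned) and "defined" (pinned) looks roughly like the following. It is not the bufmgr.c code.

      #include <stdlib.h>
      #include <valgrind/memcheck.h>

      int
      main(void)
      {
          size_t  len = 8192;
          char   *page = malloc(len);

          if (page == NULL)
              return 1;

          /* "Unpinned": any read or write is reported by memcheck. */
          VALGRIND_MAKE_MEM_NOACCESS(page, len);

          /* "Pinned": the region may be accessed again. */
          VALGRIND_MAKE_MEM_DEFINED(page, len);
          page[0] = 1;

          free(page);
          return 0;
      }
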
* Cope with data-offset-less archive files during out-of-order restores.  (Tom Lane, 2020-07-17)

  pg_dump produces custom-format archive files that lack data offsets when it is unable to seek its output. Up to now that's been a hazard for pg_restore. But if pg_restore is able to seek in the archive file, there is no reason to throw up our hands when asked to restore data blocks out of order. Instead, whenever we are searching for a data block, record the locations of the blocks we passed over (that is, fill in the missing data-offset fields in our in-memory copy of the TOC data). Then, when we hit a case that requires going backwards, we can just seek back.

  Also track the furthest point that we've searched to, and seek back to there when beginning a search for a new data block. This avoids possible O(N^2) time consumption, by ensuring that each data block is examined at most twice. (On Unix systems, that's at most twice per parallel-restore job; but since Windows uses threads here, the threads can share block location knowledge, reducing the amount of duplicated work.)

  We can also improve the code a bit by using fseeko() to skip over data blocks during the search.

  This is all of some use even in simple restores, but it's really significant for parallel pg_restore. In that case, we require seekability of the input already, and we will very probably need to do out-of-order restores.

  Back-patch to v12, as this fixes a regression introduced by commit 548e50976. Before that, parallel restore avoided requesting out-of-order restores, so it would work on a data-offset-less archive. Now it will again.

  Ideally this patch would include some test coverage, but there are other open bugs that need to be fixed before we can extend our coverage of parallel restore very much. Plan to revisit that later.

  David Gilman and Tom Lane; reviewed by Justin Pryzby

  Discussion: https://postgr.es/m/CALBH9DDuJ+scZc4MEvw5uO-=vRyR2=QF9+Yh=3hPEnKHWfS81A@mail.gmail.com

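  A generic sketch of the forward-scan-with-memory technique described above, using plain stdio seeking; the structure and field names are invented for illustration and are not pg_backup_custom.c's.

      #include <stdio.h>
      #include <sys/types.h>

      #define MAX_BLOCKS 1024

      typedef struct
      {
          off_t   offset[MAX_BLOCKS]; /* 0 = location not yet known */
          off_t   furthest;           /* furthest point searched so far */
      } BlockIndex;

      /* While scanning forward, remember where each block we pass over starts. */
      static void
      remember_block(BlockIndex *idx, int block_id, FILE *fp)
      {
          off_t   here = ftello(fp);

          if (block_id >= 0 && block_id < MAX_BLOCKS && idx->offset[block_id] == 0)
              idx->offset[block_id] = here;
          if (here > idx->furthest)
              idx->furthest = here;
      }

      /* To restore a block: seek straight back if its offset is already known,
       * otherwise resume the forward scan from the furthest point reached. */
      static int
      seek_for_block(BlockIndex *idx, int block_id, FILE *fp)
      {
          if (idx->offset[block_id] != 0)
              return fseeko(fp, idx->offset[block_id], SEEK_SET);
          return fseeko(fp, idx->furthest, SEEK_SET);
      }
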
* Remove manual tracking of file position in pg_dump/pg_backup_custom.c.  (Tom Lane, 2020-07-17)

  We do not really need to track the file position by hand. We were already relying on ftello() whenever the archive file is seekable, while if it's not seekable we don't need the file position info anyway because we're not going to be able to re-write the TOC.

  Moreover, that tracking was buggy since it failed to account for the effects of fseeko(). Somewhat remarkably, that seems not to have made for any live bugs up to now. We could fix the oversights, but it seems better to just get rid of the whole error-prone mess.

  In itself this is merely code cleanup. However, it's necessary infrastructure for an upcoming bug-fix patch (because that code *does* need valid file position after fseeko). The bug fix needs to go back as far as v12; hence, back-patch that far.

  Discussion: https://postgr.es/m/CALBH9DDuJ+scZc4MEvw5uO-=vRyR2=QF9+Yh=3hPEnKHWfS81A@mail.gmail.com

* Avoid CREATE INDEX unique index deduplication.  (Peter Geoghegan, 2020-07-17)

  There is no advantage to attempting deduplication for a unique index during CREATE INDEX, since there cannot possibly be any duplicates. Doing so wastes cycles due to unnecessary copying. Make sure that we avoid it consistently.

  We already avoided unique index deduplication in the case where there were some spool2 tuples to merge. That didn't account for the fact that spool2 is removed early/unset in the common case where it has no tuples that need to be merged (i.e. it failed to account for the "spool2 turns out to be unnecessary" optimization in _bt_spools_heapscan()).

  Oversight in commit 0d861bbb, which added nbtree deduplication.

  Backpatch: 13-, where nbtree deduplication was introduced.

* Ensure that distributed timezone abbreviation files are plain ASCII.  (Tom Lane, 2020-07-17)

  We had two occurrences of "Mitteleuropäische Zeit" in Europe.txt, though the corresponding entries in Default were spelled "Mitteleuropaeische Zeit". Standardize on the latter spelling to avoid questions of which encoding to use.

  While here, correct a couple of other trivial inconsistencies between the Default file and the supposedly-matching entries in the *.txt files, as exposed by some checking with comm(1). Also, add BDST to the Europe.txt file; it previously was only listed in Default. None of this has any direct functional effect.

  Per complaint from Christoph Berg. As usual for timezone data patches, apply to all branches.

  Discussion: https://postgr.es/m/20200716100743.GE3534683@msg.df7cb.de

* Fix whitespace  (Peter Eisentraut, 2020-07-17)

* Resolve gratuitous tabs in SQL file  (Peter Eisentraut, 2020-07-17)

* Fix signal handler setup for SIGHUP in the apply launcher process.  (Amit Kapila, 2020-07-17)

  Commit 1e53fe0e70 unified the usage of the config-file reload flag by using the same signal handler function for the SIGHUP signal in many places in the code. By mistake, the apply launcher process registered that SIGHUP handler function for the wrong signal.

  Author: Bharath Rupireddy
  Reviewed-by: Dilip Kumar
  Backpatch-through: 13, where it was introduced
  Discussion: https://postgr.es/m/CALj2ACVzHCRnS20bOiEHaLtP5PVBENZQn4khdsSJQgOv_GM-LA@mail.gmail.com

* Use MinimalTuple for tuple queues.  (Thomas Munro, 2020-07-17)

  This representation saves 8 bytes per tuple compared to HeapTuple, and avoids the need to allocate, copy and free on the receiving side.

  Gather can emit the returned MinimalTuple directly, but GatherMerge now needs to make an explicit copy because it buffers multiple tuples at a time. That should be no worse than before.

  Reviewed-by: Soumyadeep Chakraborty <soumyadeep2007@gmail.com>
  Discussion: https://postgr.es/m/CA%2BhUKG%2B8T_ggoUTAE-U%3DA%2BOcPc4%3DB0nPPHcSfffuQhvXXjML6w%40mail.gmail.com

* Add huge_page_size setting for use on Linux.  (Thomas Munro, 2020-07-17)

  This allows the huge page size to be set explicitly. The default is 0, meaning it will use the system default, as before.

  Author: Odin Ugedal <odin@ugedal.com>
  Discussion: https://postgr.es/m/20200608154639.20254-1-odin%40ugedal.com

* Enable almost all TAP tests involving symlinks on Windows  (Andrew Dunstan, 2020-07-16)

  Windows has junction points which function as symbolic links for directories. This patch introduces a new function TestLib::dir_symlink() which creates a junction point on Windows and a standard Unix type symbolic link elsewhere.

  The function TestLib::perl2host is also modified, first to use cygpath where it's available (e.g. msys2) and second to allow it to succeed if the grandparent directory exists but the parent does not.

  Given these changes, the only symlink tests that need to be skipped on Windows are those related to permissions or to use of readlink. The relevant tests for pg_basebackup and pg_rewind are therefore adjusted accordingly.

  Andrew Dunstan, reviewed by Peter Eisentraut and Michael Paquier.

  Discussion: https://postgr.es/m/c50a646c-d9bb-7c62-a4bf-8256ff6ff338@2ndquadrant.com

* Switch pg_test_fsync to use binary mode on Windows  (Michael Paquier, 2020-07-16)

  pg_test_fsync has always opened files using text mode on Windows, as this is the default mode used if not enforced by _setmode().

  This fixes a failure when running pg_test_fsync down to 12, because O_DSYNC and the text mode are not able to work together nicely. We fixed the handling of O_DSYNC in 12~ for the tool by switching to the concurrent-safe version of fopen() in src/port/ with 0ba06e0. And 40cfe86, by enforcing the text mode for compatibility reasons if O_TEXT or O_BINARY are not specified by the caller, broke pg_test_fsync. For all versions, this avoids any translation overhead, and pg_test_fsync should test binary writes, so it is a gain in all cases.

  Note that O_DSYNC is still not handled correctly in ~11, leading pg_test_fsync to show insanely high numbers for open_datasync() (using this property it is easy to notice that the binary mode is much faster). This would require a backpatch of 0ba06e0 and 40cfe86, which could potentially break existing applications, so this is left out.

  There are no TAP tests for this tool yet, so I have checked all builds manually using MSVC. We could invent a new option to run a single transaction instead of using a duration of 1s to keep the test as short as possible, but this is left as future work.

  Thanks to Bruce Momjian for the discussion.

  Reported-by: Jeff Janes
  Author: Michael Paquier
  Discussion: https://postgr.es/m/16526-279ded30a230d275@postgresql.org
  Backpatch-through: 9.5

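  A hedged sketch of the Windows binary-mode pattern involved: open the file in binary mode (or force it with _setmode()) so bytes are written verbatim with no CRLF translation. This is illustrative and not pg_test_fsync's code.

      #include <stdio.h>

      #ifdef _WIN32
      #include <io.h>
      #include <fcntl.h>
      #endif

      int
      main(void)
      {
          FILE   *fp = fopen("testfile.bin", "wb");   /* "b" requests binary mode */

          if (fp == NULL)
              return 1;
      #ifdef _WIN32
          /* Explicitly force binary mode on Windows, belt and braces. */
          _setmode(_fileno(fp), _O_BINARY);
      #endif
          fputc(0x0A, fp);        /* written verbatim, no CRLF translation */
          fclose(fp);
          return 0;
      }
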
* pg_dump: Reorganize dumpFunc() and dumpAgg()  (Peter Eisentraut, 2020-07-15)

  Similar to daa9fe8a5264a3f192efa5ddee8fb011ad9da365, instead of repeating almost the same large query in each version branch, use one query and add a few columns to the SELECT list depending on the version. This saves a lot of duplication.

  Reviewed-by: Fabien COELHO <coelho@cri.ensmp.fr>
  Discussion: https://www.postgresql.org/message-id/flat/6594334b-40fd-14f1-6bc5-877afa3feed5@2ndquadrant.com

* Fix handling of missing files when using pg_rewind with online source  (Michael Paquier, 2020-07-15)

  When working with an online source cluster, pg_rewind gets a list of all the files in the source data directory using a WITH RECURSIVE query, returning a NULL result for a file's metadata if it gets removed between the moment it is listed in a directory and the moment its metadata is obtained with pg_stat_file() (say a recycled WAL segment). The query result was processed in such a way that, for each tuple, we checked only whether the first file's metadata was NULL. This could have two consequences, both resulting in a failure of the rewind:
  - If the first tuple referred to a removed file, all files from the source would be ignored.
  - Any file actually missing would not be considered as such.

  While on it, rework the code slightly so that no values are saved if we know that a file is going to be skipped.

  Issue introduced by b36805f, so backpatch down to 9.5.

  Author: Justin Pryzby, Michael Paquier
  Reviewed-by: Daniel Gustafsson, Masahiko Sawada
  Discussion: https://postgr.es/m/20200713061010.GC23581@telsasoft.com
  Backpatch-through: 9.5

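  A rough sketch of the corrected per-row handling using libpq, with invented column positions; this is not the pg_rewind source.

      #include <libpq-fe.h>

      /* Walk the file-list result row by row, skipping only the rows whose
       * metadata column is NULL (a file that vanished between the directory
       * listing and pg_stat_file()), instead of testing only the first tuple. */
      static void
      process_file_list(PGresult *res)
      {
          int     ntuples = PQntuples(res);

          for (int i = 0; i < ntuples; i++)
          {
              if (PQgetisnull(res, i, 1))
                  continue;       /* file disappeared; skip this row only */

              const char *path = PQgetvalue(res, i, 0);

              /* ... process the file ... */
              (void) path;
          }
      }
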