path: root/src
...
* Add parallel-aware hash joins. (Andres Freund, 2017-12-21)

  Introduce parallel-aware hash joins that appear in EXPLAIN plans as Parallel
  Hash Join with Parallel Hash.  While hash joins could already appear in
  parallel queries, they were previously always parallel-oblivious and had a
  partial subplan only on the outer side, meaning that the work of the inner
  subplan was duplicated in every worker.

  After this commit, the planner will consider using a partial subplan on the
  inner side too, using the Parallel Hash node to divide the work over the
  available CPU cores and combine its results in shared memory.  If the join
  needs to be split into multiple batches in order to respect work_mem, then
  workers process different batches as much as possible and then work together
  on the remaining batches.

  The advantages of a parallel-aware hash join over a parallel-oblivious hash
  join used in a parallel query are that it:

   * avoids wasting memory on duplicated hash tables
   * avoids wasting disk space on duplicated batch files
   * divides the work of building the hash table over the CPUs

  One disadvantage is that there is some communication between the
  participating CPUs which might outweigh the benefits of parallelism in the
  case of small hash tables.  This is avoided by the planner's existing
  reluctance to supply partial plans for small scans, but it may be necessary
  to estimate synchronization costs in future if that situation changes.
  Another is that outer batch 0 must be written to disk if multiple batches
  are required.

  A potential future advantage of parallel-aware hash joins is that right and
  full outer joins could be supported, since there is a single set of matched
  bits for each hashtable, but that is not yet implemented.

  A new GUC enable_parallel_hash is defined to control the feature, defaulting
  to on.

  Author: Thomas Munro
  Reviewed-By: Andres Freund, Robert Haas
  Tested-By: Rafia Sabih, Prabhat Sahu
  Discussion:
      https://postgr.es/m/CAEepm=2W=cOkiZxcg6qiFQP-dHUe09aqTrEMM7yJDrHMhDv_RA@mail.gmail.com
      https://postgr.es/m/CAEepm=37HKyJ4U6XOLi=JgfSHM3o6B-GaeO-6hkOmneTDkH+Uw@mail.gmail.com

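  A minimal libpq sketch of how the feature can be observed from a client; the
  table names are placeholders, and the planner will only pick the new plan
  shape when the tables are large enough to justify a partial inner plan:

      /* compile with: cc demo.c -lpq */
      #include <stdio.h>
      #include <libpq-fe.h>

      int
      main(void)
      {
          PGconn   *conn = PQconnectdb("dbname=postgres");
          PGresult *res;

          if (PQstatus(conn) != CONNECTION_OK)
          {
              fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
              return 1;
          }

          /* enable_parallel_hash defaults to on; set it explicitly here only
           * for illustration. */
          PQclear(PQexec(conn, "SET enable_parallel_hash = on"));

          /* big_a and big_b are hypothetical tables. */
          res = PQexec(conn,
                       "EXPLAIN (COSTS OFF) SELECT count(*) "
                       "FROM big_a JOIN big_b USING (id)");
          for (int i = 0; i < PQntuples(res); i++)
              printf("%s\n", PQgetvalue(res, i, 0));  /* look for "Parallel Hash Join" */

          PQclear(res);
          PQfinish(conn);
          return 0;
      }
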
* When passing query strings to workers, pass the terminating \0. (Robert Haas, 2017-12-20)

  Otherwise, when the query string is read, we might read trailing garbage
  beyond the end, unless there happens to be a \0 there by good luck.

  Report and patch by Thomas Munro.  Reviewed by Rafia Sabih.

  Discussion: http://postgr.es/m/CAEepm=2SJs7X+_vx8QoDu8d1SMEOxtLhxxLNzZun_BvNkuNhrw@mail.gmail.com

* Test instrumentation of Hash nodes with parallel query. (Robert Haas, 2017-12-19)

  Commit 8526bcb2df76d5171b4f4d6dc7a97560a73a5eff fixed bugs related to both
  Sort and Hash, but only added a test case for Sort.  This adds a test case
  for Hash to match.

  Thomas Munro

  Discussion: http://postgr.es/m/CAEepm=2-LRnfwUBZDqQt+XAcd0af_ykNyyVvP3h1uB1AQ=e-eA@mail.gmail.com

* Try again to fix accumulation of parallel worker instrumentation. (Robert Haas, 2017-12-19)

  When a Gather or Gather Merge node is started and stopped multiple times,
  accumulate instrumentation data only once, at the end, instead of after each
  execution, to avoid recording inflated totals.  Commit
  778e78ae9fa51e58f41cbdc72b293291d02d8984, the previous attempt at a fix,
  instead reset the state after every execution, which worked for the general
  instrumentation data but had problems for the additional instrumentation
  specific to Sort and Hash nodes.

  Report by hubert depesz lubaczewski.  Analysis and fix by Amit Kapila,
  following a design proposal from Thomas Munro, with a comment tweak by me.

  Discussion: http://postgr.es/m/20171127175631.GA405@depesz.com

* Re-fix wrong costing of Sort under Gather Merge. (Robert Haas, 2017-12-19)

  Commit dc02c7bca4dccf7de278cdc6b3325a829e75b252 changed this call to
  create_sort_path() to take -1 rather than limit_tuples because, at that
  time, there was no way for a Sort beneath a Gather Merge to become a top-N
  sort.

  Later, commit 3452dc5240da43e833118484e1e9b4894d04431c provided a way for a
  Sort beneath a Gather Merge to become a top-N sort, but failed to revert the
  previous commit in the process.  Do that.

  Report and analysis by Jeff Janes; patch by Thomas Munro; review by Amit
  Kapila and by me.

  Discussion: http://postgr.es/m/CAEepm=1BWtC34vUroA0Uqjw02MaqdUrW+d6WD85_k8SLyPiKHQ@mail.gmail.com

* Mark a few parallelism-related variables with PGDLLIMPORT. (Robert Haas, 2017-12-19)

  Discussion: http://postgr.es/m/CAEepm=2HzxAOKU6eCWTyvMwBy=fhGvbwDPM_fVps759tkyQSYQ@mail.gmail.com

* Add libpq connection parameter "scram_channel_binding" (Peter Eisentraut, 2017-12-19)

  This parameter can be used to enforce the channel binding type used during
  SCRAM authentication.  This can be useful to check code paths where an
  invalid channel binding type is used by a client and will be even more
  useful to allow testing other channel binding types when they are added.

  The default value is tls-unique, which is what RFC 5802 specifies.  Clients
  can optionally specify an empty value, which has the effect of not using
  channel binding and using SCRAM-SHA-256 as the chosen SASL mechanism.

  More tests for SCRAM and channel binding are added to the SSL test suite.

  Author: Michael Paquier <michael.paquier@gmail.com>

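  A short, hedged libpq sketch of the new parameter; every connection option
  below other than scram_channel_binding is a placeholder:

      #include <stdio.h>
      #include <libpq-fe.h>

      int
      main(void)
      {
          /* tls-unique is the default; an empty value disables channel
           * binding and uses plain SCRAM-SHA-256. */
          PGconn *conn = PQconnectdb(
              "host=db.example.com dbname=postgres sslmode=require "
              "scram_channel_binding=tls-unique");

          if (PQstatus(conn) != CONNECTION_OK)
              fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
          else
              printf("connected\n");

          PQfinish(conn);
          return 0;
      }
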
* Add shared tuplestores. (Andres Freund, 2017-12-18)

  SharedTuplestore allows multiple participants to write into it and then read
  the tuples back from it in parallel.  Each reader receives partial results.

  For now it always uses disk files, but other buffering policies and other
  kinds of scans (ie each reader receives complete results) may be useful in
  future.

  The upcoming parallel hash join feature will use this facility.

  Author: Thomas Munro
  Reviewed-By: Peter Geoghegan, Andres Freund, Robert Haas
  Discussion: https://postgr.es/m/CAEepm=2W=cOkiZxcg6qiFQP-dHUe09aqTrEMM7yJDrHMhDv_RA@mail.gmail.com

* Move SCRAM-related name definitions to scram-common.h (Peter Eisentraut, 2017-12-18)

  Mechanism names for SCRAM and channel binding have been included in scram.h
  by the libpq frontend code, but this header references a set of routines
  which are only used by the backend.  scram-common.h, by contrast, is usable
  by both the backend and libpq, so getting those names from there seems more
  reasonable.

  Author: Michael Paquier <michael.paquier@gmail.com>

* Fix bug in cancellation of non-exclusive backup to avoid assertion failure. (Fujii Masao, 2017-12-19)

  Previously an assertion failure occurred when pg_stop_backup() for
  non-exclusive backup was aborted while it was waiting for WAL files to be
  archived.  This assertion failure happened in do_pg_abort_backup() which was
  called when a non-exclusive backup was canceled.  do_pg_abort_backup()
  assumes that there is at least one non-exclusive backup running when it's
  called.  But pg_stop_backup() can be canceled even after it marks the end of
  non-exclusive backup (e.g., during waiting for WAL archiving).  This broke
  the assumption that do_pg_abort_backup() relies on, which caused an
  assertion failure.

  This commit changes do_pg_abort_backup() so that it does nothing when
  non-exclusive backup has already been marked as completed.  That is, the
  assumption is also changed, and do_pg_abort_backup() can now handle even the
  case where it's called when there is no running backup.

  Backpatch to 9.6 where SQL-callable non-exclusive backup was added.

  Author: Masahiko Sawada and Michael Paquier
  Reviewed-By: Robert Haas and Fujii Masao
  Discussion: https://www.postgresql.org/message-id/CAD21AoD2L1Fu2c==gnVASMyFAAaq3y-AQ2uEVj-zTCGFFjvmDg@mail.gmail.com

* Fix crashes on plans with multiple Gather (Merge) nodes. (Robert Haas, 2017-12-18)

  es_query_dsa turns out to be broken by design, because it supposes that
  there is only one DSA for the whole query, whereas there is actually one per
  Gather (Merge) node.  For now, work around that problem by setting and
  clearing the pointer around the sections of code that might need it.  It's
  probably a better idea to get rid of es_query_dsa altogether in favor of
  having each node keep track individually of which DSA is relevant, but that
  seems like more than we would want to back-patch.

  Thomas Munro, reviewed and tested by Andreas Seltenreich, Amit Kapila, and
  by me.

  Discussion: http://postgr.es/m/CAEepm=1U6as=brnVvMNixEV2tpi8NuyQoTmO8Qef0-VV+=7MDA@mail.gmail.com

* Fix typo in comment (Magnus Hagander, 2017-12-18)

  Author: David Rowley

* Suppress compiler warning about no function return value. (Tom Lane, 2017-12-17)

  Compilers that don't know that ereport(ERROR) doesn't return complained
  about the new coding in scanint8() introduced by commit 101c7ee3e.  Tweak
  coding to avoid the warning.

  Per buildfarm.

* Avoid and detect SIGPIPE race in TAP tests. (Noah Misch, 2017-12-16)

  Don't write to stdin of a psql process that could have already exited with
  an authentication failure.  Buildfarm members crake and mandrill have failed
  once by doing so.  Ignore SIGPIPE in all TAP tests.

  Back-patch to v10, where these tests were introduced.

  Reviewed by Michael Paquier.

  Discussion: https://postgr.es/m/20171209210203.GC3362632@rfd.leadboat.com

* Fix oversights in new plpgsql test suite infrastructure. (Tom Lane, 2017-12-16)

  Fix a couple of minor oversights in commit 632b03da3: the tests should be
  run in database "pl_regression" like the other PLs do, and we should clean
  up the tests' output cruft during "make clean".

* Perform a lot more sanity checks when freezing tuples. (Andres Freund, 2017-12-14)

  The previous commit has shown that the sanity checks around freezing aren't
  strong enough.  Strengthening them seems especially important because the
  existence of the bug has caused corruption that we don't want to make even
  worse during future vacuum cycles.

  The errors are emitted with ereport rather than elog, despite being "should
  never happen" messages, so a proper error code is emitted.  To avoid
  superfluous translations, mark messages as internal.

  Author: Andres Freund and Alvaro Herrera
  Reviewed-By: Alvaro Herrera, Michael Paquier
  Discussion: https://postgr.es/m/20171102112019.33wb7g5wp4zpjelu@alap3.anarazel.de
  Backpatch: 9.3-

* Fix pruning of locked and updated tuples. (Andres Freund, 2017-12-14)

  Previously it was possible that a tuple was not pruned during vacuum, even
  though its update xmax (i.e. the updating xid in a multixact with both key
  share lockers and an updater) was below the cutoff horizon.

  As the freezing code assumed, rightly so, that that's not supposed to
  happen, xmax would be preserved (as a member of a new multixact or xmax
  directly).  That causes two problems: For one the tuple is below the xmin
  horizon, which can cause problems if the clog is truncated or once there's
  an xid wraparound.  The bigger problem is that that will break HOT chains,
  which in turn can lead to two breakages: first, failing index lookups, which
  can e.g. lead to constraints being violated; second, future hot prunes /
  vacuums can end up making invisible tuples visible again.  There are other
  harmful scenarios.

  Fix the problem by recognizing that tuples can be DEAD instead of
  RECENTLY_DEAD, even if the multixactid has alive members, if the update_xid
  is below the xmin horizon.  That's safe because newer versions of the tuple
  will contain the locking xids.

  A followup commit will harden the code somewhat against future similar bugs
  and already corrupted data.

  Author: Andres Freund, with changes by Alvaro Herrera
  Reported-By: Daniel Wood
  Analyzed-By: Andres Freund, Alvaro Herrera, Robert Haas, Peter Geoghegan,
    Daniel Wood, Yi Wen Wong, Michael Paquier
  Reviewed-By: Alvaro Herrera, Robert Haas, Michael Paquier
  Discussion:
      https://postgr.es/m/E5711E62-8FDF-4DCA-A888-C200BF6B5742@amazon.com
      https://postgr.es/m/20171102112019.33wb7g5wp4zpjelu@alap3.anarazel.de
  Backpatch: 9.3-

* Fix a number of copy & paste comment errors in common/int.h. (Andres Freund, 2017-12-14)

  Author: Christoph Berg
  Discussion: https://postgr.es/m/20171214082808.GA5775@msg.df7cb.de

* Fix walsender timeouts when decoding a large transaction (Andrew Dunstan, 2017-12-14)

  The logical slots have a fast code path for sending data so as not to impose
  too high a per message overhead.  The fast path skips checks for interrupts
  and timeouts.  However, the existing coding failed to consider the fact that
  a transaction with a large number of changes may take a very long time to be
  processed and sent to the client.  This causes the walsender to ignore
  interrupts for potentially a long time and, more importantly, results in the
  walsender being killed due to timeout at the end of such a transaction.

  This commit changes the fast path to also check for interrupts and only
  allows calling the fast path when the last keepalive check happened less
  than half the walsender timeout ago.  Otherwise the slower code path will be
  taken.

  Backpatched to 9.4

  Petr Jelinek, reviewed by Kyotaro HORIGUCHI, Yura Sokolov, Craig Ringer and
  Robert Haas.

  Discussion: https://postgr.es/m/e082a56a-fd95-a250-3bae-0fff93832510@2ndquadrant.com

* Add approximated Zipfian-distributed random generator to pgbench. (Teodor Sigaev, 2017-12-14)

  The generator helps make tests closer to real-world workloads.

  Author: Alik Khilazhev
  Reviewed-By: Fabien COELHO
  Discussion: https://www.postgresql.org/message-id/flat/BF3B6F54-68C3-417A-BFAB-FB4D66F2B410@postgrespro.ru

* Allow executor nodes to change their ExecProcNode function. (Andres Freund, 2017-12-13)

  In order for executor nodes to be able to change their ExecProcNode function
  after ExecInitNode() has finished, provide ExecSetExecProcNode().  This
  allows any wrapper functions that only execProcnode.c knows about to be
  reinstalled.

  The motivation for wanting to change ExecProcNode after ExecInitNode() has
  finished is that it is not known until later whether parallel query is
  available, so if a parallel variant is to be installed then ExecInitNode()
  is too soon to decide.

  Author: Thomas Munro
  Reviewed-By: Andres Freund
  Discussion: https://postgr.es/m/CAEepm=09rr65VN+cAV5FgyM_z=D77Xy8Fuc9CDDDYbq3pQUezg@mail.gmail.com

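  A hedged sketch of the intended usage pattern; the node variant and both
  functions below are hypothetical, only ExecSetExecProcNode() comes from this
  commit:

      #include "postgres.h"
      #include "executor/executor.h"

      static TupleTableSlot *
      ExecFooParallel(PlanState *pstate)
      {
          /* hypothetical parallel-aware implementation */
          return NULL;
      }

      static void
      switch_to_parallel_variant(PlanState *planstate)
      {
          /*
           * Once we learn (after ExecInitNode()) that this node will run in a
           * parallel context, install the parallel-aware variant;
           * execProcnode.c re-applies any wrapper functions it knows about.
           */
          ExecSetExecProcNode(planstate, ExecFooParallel);
      }
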
* Add pg_attribute_always_inline. (Andres Freund, 2017-12-13)

  Sometimes it is useful to be able to insist that the compiler inline a
  function that its normal cost analysis would not normally choose to inline.
  This can be useful for instantiating different variants of a function that
  remove branches of code by constant folding.

  Author: Thomas Munro
  Reviewed-By: Andres Freund
  Discussion: https://postgr.es/m/CAEepm=09rr65VN+cAV5FgyM_z=D77Xy8Fuc9CDDDYbq3pQUezg@mail.gmail.com

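  A hedged illustration of that constant-folding use case; the helper and its
  callers are hypothetical:

      #include "postgres.h"

      /* Forced inline so the constant "parallel" argument folds away the
       * untaken branch in each instantiation. */
      static pg_attribute_always_inline int
      fetch_next(int value, bool parallel)
      {
          if (parallel)
              return value * 2;   /* pretend: shared-memory path */
          else
              return value + 1;   /* pretend: backend-private path */
      }

      static int
      fetch_next_serial(int value)
      {
          return fetch_next(value, false);
      }

      static int
      fetch_next_parallel(int value)
      {
          return fetch_next(value, true);
      }
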
* Add defenses against pre-crash files to BufFileOpenShared(). (Andres Freund, 2017-12-13)

  Crash restarts currently don't clean up temporary files, as a debugging aid.
  If a left-over file happens to have the same name as a segment file we're
  trying to create, we'll just truncate and reuse it, but there is a problem:
  BufFileOpenShared() determines how many segment files exist by trying to
  open .0, .1, .2, ... until it finds no more files.  It might be confused by
  a junk file that has the next segment number.  To defend against that, make
  sure we always create a gap after the end file by unlinking the following
  name if it exists.

  Also make it an error to try to open a BufFile that doesn't exist (has no
  segment 0), so as not to encourage the development of client code that
  depends on an interface that we can't reliably provide.

  Author: Thomas Munro
  Reviewed-By: Andres Freund
  Discussion: https://postgr.es/m/CAEepm%3D2jhCbC_GFQJaaDhWxLB4EXtT3vVd5czuRNaqF5CWSTog%40mail.gmail.com

* Fix parallel index scan hang with deleted or half-dead pages. (Robert Haas, 2017-12-13)

  The previous coding forgot to release the scan before seizing it again,
  leading to a lockup.

  Report by Patrick Hemmer.  Diagnosis by Thomas Munro.  Patch by Amit Kapila.

  Discussion: http://postgr.es/m/CAEepm=2xZUcOGP9V0O_G0=2P2wwXwPrkF=upWTCJSisUxMnuSg@mail.gmail.com

* Revert "Fix accumulation of parallel worker instrumentation."Robert Haas2017-12-13
| | | | | | | This reverts commit 2c09a5c12a66087218c7f8cba269cd3de51b9b82. Per further discussion, that doesn't seem to be the best possible fix. Discussion: http://postgr.es/m/CAA4eK1LW2aFKzY3=vwvc=t-juzPPVWP2uT1bpx_MeyEqnM+p8g@mail.gmail.com
* Rethink MemoryContext creation to improve performance. (Tom Lane, 2017-12-13)

  This patch makes a number of interrelated changes to reduce the overhead
  involved in creating/deleting memory contexts.  The key ideas are:

  * Include the AllocSetContext header of an aset.c context in its first
    malloc request, rather than allocating it separately in TopMemoryContext.
    This means that we now always create an initial or "keeper" block in an
    aset, even if it never receives any allocation requests.

  * Create freelists in which we can save and recycle recently-destroyed
    asets (this idea is due to Robert Haas).

  * In the common case where the name of a context is a constant string, just
    store a pointer to it in the context header, rather than copying the
    string.

  The first change eliminates a palloc/pfree cycle per context, and also
  avoids bloat in TopMemoryContext, at the price that creating a context now
  involves a malloc/free cycle even if the context never receives any
  allocations.  That would be a loser for some common usage patterns, but
  recycling short-lived contexts via the freelist eliminates that pain.

  Avoiding copying constant strings not only saves strlen() and strcpy()
  overhead, but is an essential part of the freelist optimization because it
  makes the context header size constant.  Currently we make no attempt to use
  the freelist for contexts with non-constant names.  (Perhaps someday we'll
  need to think harder about that, but in current usage, most contexts with
  custom names are long-lived anyway.)

  The freelist management in this initial commit is pretty simplistic, and we
  might want to refine it later --- but in common workloads that will never
  matter because the freelists will never get full anyway.

  To create a context with a non-constant name, one is now required to call
  AllocSetContextCreateExtended and specify the MEMCONTEXT_COPY_NAME option.
  AllocSetContextCreate becomes a wrapper macro, and it includes a test that
  will complain about non-string-literal context name parameters on gcc and
  similar compilers.

  An unfortunate side effect of making AllocSetContextCreate a macro is that
  one is now *required* to use the size parameter abstraction macros
  (ALLOCSET_DEFAULT_SIZES and friends) with it; the pre-9.6 habit of writing
  out individual size parameters no longer works unless you switch to
  AllocSetContextCreateExtended.

  Internally to the memory-context-related modules, the context creation APIs
  are simplified, removing the rather baroque original design whereby a
  context-type module called mcxt.c which then called back into the
  context-type module.  That saved a bit of code duplication, but not much,
  and it prevented context-type modules from exercising control over the
  allocation of context headers.

  In passing, I converted the test-and-elog validation of aset size parameters
  into Asserts to save a few more cycles.  The original thought was that
  callers might compute size parameters on the fly, but in practice nobody
  does that, so it's useless to expend cycles on checking those numbers in
  production builds.

  Also, mark the memory context method-pointer structs "const", just for
  cleanliness.

  Discussion: https://postgr.es/m/2264.1512870796@sss.pgh.pa.us

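  A hedged sketch of the caller-visible distinction described above; the
  context names and parent choices are illustrative, and the exact argument
  order of the extended function is assumed here rather than quoted from the
  commit:

      #include "postgres.h"
      #include "utils/memutils.h"

      static MemoryContext
      make_contexts(const char *dynamic_name)
      {
          /* Constant string literal name: use the (now macro) form, which
           * just stores the pointer and can recycle a header from the
           * freelist. */
          MemoryContext cxt = AllocSetContextCreate(CurrentMemoryContext,
                                                    "example context",
                                                    ALLOCSET_DEFAULT_SIZES);

          /* Non-constant name: the extended form must be used, and the name
           * must be explicitly copied into the context. */
          MemoryContext child = AllocSetContextCreateExtended(cxt,
                                                               dynamic_name,
                                                               MEMCONTEXT_COPY_NAME,
                                                               ALLOCSET_DEFAULT_SIZES);

          return child;
      }
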
* Start a separate test suite for plpgsql (Peter Eisentraut, 2017-12-13)

  The plpgsql.sql test file in the main regression tests is now by far the
  largest after numeric_big, making editing and managing the test cases very
  cumbersome.  The other PLs have their own test suites split up into smaller
  files by topic.  It would be nice to have that for plpgsql as well.  So, to
  get that started, set up test infrastructure in src/pl/plpgsql/src/ and
  split out the recently added procedure test cases into a new file there.
  That file now mirrors the test cases added to the other PLs, making managing
  those matching tests a bit easier too.

  msvc build system changes with help from Michael Paquier

* Fix crash when using CALL on an aggregate (Peter Eisentraut, 2017-12-13)

  Author: Ashutosh Bapat <ashutosh.bapat@enterprisedb.com>
  Reported-by: Rushabh Lathia <rushabh.lathia@gmail.com>

* Add float.h include to int8.c, for isnan(). (Andres Freund, 2017-12-12)

  port.h redirects isnan() to _isnan() on windows, which in turn is provided
  by float.h rather than math.h.  Therefore include the latter as well.

  Per buildfarm.

* Consistently use PG_INT(16|32|64)_(MIN|MAX). (Andres Freund, 2017-12-12)

  Per buildfarm animal woodlouse.

* PL/Python: Fix potential NULL pointer dereference (Peter Eisentraut, 2017-12-12)

  After d0aa965c0a0ac2ff7906ae1b1dad50a7952efa56, one error path in
  PLy_spi_execute_fetch_result() could result in the variable "result" being
  dereferenced after being set to NULL.  Rearrange the code a bit to fix that.

  Also add another SPI_freetuptable() call so that that is cleared in all
  error paths.

  discovered by John Naylor <jcnaylor@gmail.com> via scan-build

  ideas and review by Tom Lane

* Use new overflow aware integer operations. (Andres Freund, 2017-12-12)

  A previous commit added inline functions that provide fast(er) and correct
  overflow checks for signed integer math.  Use them in a significant portion
  of backend code.  There's more to touch in both backend and frontend code,
  but these were the easily identifiable cases.

  The old overflow checks are noticeable in integer heavy workloads.

  A secondary benefit is that getting rid of overflow checks that rely on
  signed integer overflow wrapping around will allow us to get rid of -fwrapv
  in the future, which in turn slows down other code.

  Author: Andres Freund
  Discussion: https://postgr.es/m/20171024103954.ztmatprlglz3rwke@alap3.anarazel.de

* Provide overflow safe integer math inline functions. (Andres Freund, 2017-12-12)

  It's not easy to get signed integer overflow checks correct and fast.
  Therefore abstract the necessary infrastructure into a common header
  providing addition, subtraction and multiplication for 16, 32, 64 bit signed
  integers.

  The new macros aren't yet used, but a followup commit will convert several
  open coded overflow checks.

  Author: Andres Freund, with some code stolen from Greg Stark
  Reviewed-By: Robert Haas
  Discussion: https://postgr.es/m/20171024103954.ztmatprlglz3rwke@alap3.anarazel.de

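  A minimal sketch of how backend code can use one of the new helpers from
  common/int.h (the surrounding function and error text are illustrative):

      #include "postgres.h"
      #include "common/int.h"

      static int32
      add_or_error(int32 a, int32 b)
      {
          int32       result;

          /* pg_add_s32_overflow() returns true if a + b would overflow,
           * otherwise it stores the sum through the out parameter. */
          if (pg_add_s32_overflow(a, b, &result))
              ereport(ERROR,
                      (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),
                       errmsg("integer out of range")));

          return result;
      }
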
* Remove obsolete comment. (Robert Haas, 2017-12-12)

  Commit 8b304b8b72b0a60f1968d39f01cf817c8df863ec removed replacement
  selection, but left behind this comment text.  The optimization to which the
  comment refers is not relevant without replacement selection, because if we
  had so few tuples as to require only one tape, we would have just completed
  the sort in memory.

  Peter Geoghegan

  Discussion: http://postgr.es/m/CAH2-WznqupLA8CMjp+vqzoe0yXu0DYYbQSNZxmgN76tLnAOZ_w@mail.gmail.com

* Remove bug from OPTIMIZER_DEBUG code for partition-wise join. (Robert Haas, 2017-12-12)

  Etsuro Fujita, reviewed by Ashutosh Bapat

  Discussion: http://postgr.es/m/5A2A60E6.6000008@lab.ntt.co.jp

* Fix comment (Peter Eisentraut, 2017-12-11)

  Reported-by: Noah Misch <noah@leadboat.com>

* Fix corner-case coredump in _SPI_error_callback(). (Tom Lane, 2017-12-11)

  I noticed that _SPI_execute_plan initially sets spierrcontext.arg = NULL,
  and only fills it in some time later.  If an error were to happen in
  between, _SPI_error_callback would try to dereference the null pointer.
  This is unlikely --- there's not much between those points except
  push-snapshot calls --- but it's clearly not impossible.  Tweak the callback
  to do nothing if the pointer isn't set yet.

  It's been like this for awhile, so back-patch to all supported branches.

* Improve comment about PartitionBoundInfoData. (Robert Haas, 2017-12-11)

  Ashutosh Bapat, per discussion with Julien Rouhaund, who also reviewed this
  patch.

  Discussion: http://postgr.es/m/CAFjFpReBR3ftK9C23LLCZY_TDXhhjB_dgE-L9+mfTnA=gkvdvQ@mail.gmail.com

* Stabilize output of new regression test case. (Tom Lane, 2017-12-10)

  The test added by commit 390d58135 turns out to have different output in
  CLOBBER_CACHE_ALWAYS builds: there's an extra CONTEXT line in the error
  message as a result of detecting the error at a different place.  Possibly
  we should do something to make that more consistent.  But as a stopgap
  measure to make the buildfarm green again, adjust the test to suppress
  CONTEXT entirely.  We can revert this if we do something in the backend to
  eliminate the inconsistency.

  Discussion: https://postgr.es/m/31545.1512924904@sss.pgh.pa.us

* Fix plpgsql to reinitialize record variables at block re-entry. (Tom Lane, 2017-12-09)

  If one exits and re-enters a DECLARE ... BEGIN ... END block within a single
  execution of a plpgsql function, perhaps due to a surrounding loop, the
  declared variables are supposed to get re-initialized to null (or whatever
  their initializer is).  But this failed to happen for variables of type
  "record", because while exec_stmt_block() expected such variables to be
  included in the block's initvarnos list, plpgsql_add_initdatums() only adds
  DTYPE_VAR variables to that list.  This bug appears to have been there since
  the aboriginal addition of plpgsql to our tree.

  Fix by teaching plpgsql_add_initdatums() to include DTYPE_REC variables as
  well.  (We don't need to consider other DTYPEs because they don't represent
  separately-stored values.)  I failed to resist the temptation to make some
  nearby cosmetic adjustments, too.

  No back-patch, because there have not been field complaints, and it seems
  possible that somewhere out there someone has code depending on the
  incorrect behavior.  In any case this change would have no impact on
  correctly-written code.

  Discussion: https://postgr.es/m/22994.1512800671@sss.pgh.pa.us

* Fix regression test output (Magnus Hagander, 2017-12-09)

  Missed this in the last commit.

* Fix typo (Magnus Hagander, 2017-12-09)

  Reported by Robins Tharakan

* MSVC 2012+: Permit linking to 32-bit, MinGW-built libraries. (Noah Misch, 2017-12-09)

  Notably, this permits linking to the 32-bit Perl binaries advertised on
  perl.org, namely Strawberry Perl and ActivePerl.  This has a side effect of
  permitting linking to binaries built with obsolete MSVC versions.

  By default, MSVC 2012 and later require a "safe exception handler table" in
  each binary.  MinGW-built, 32-bit DLLs lack the relevant exception handler
  metadata, so linking to them failed with error LNK2026.  Restore the
  semantics of MSVC 2010, which omits the table from a given binary if some
  linker input lacks metadata.  This has no effect on 64-bit builds or on MSVC
  2010 and earlier.

  Back-patch to 9.3 (all supported versions).

  Reported by Victor Wagner.

  Discussion: https://postgr.es/m/20160326154321.7754ab8f@wagner.wagner.home

* MSVC: Test whether 32-bit Perl needs -D_USE_32BIT_TIME_T. (Noah Misch, 2017-12-08)

  Commits 5a5c2feca3fd858e70ea348822595547e6fa6c15 and
  b5178c5d08ca59e30f9d9428fa6fdb2741794e65 introduced support for modern
  MSVC-built, 32-bit Perl, but they broke use of MinGW-built, 32-bit Perl
  distributions like Strawberry Perl and modern ActivePerl.  Perl has no
  robust means to report whether it expects a -D_USE_32BIT_TIME_T ABI, so test
  this.  Back-patch to 9.3 (all supported versions).

  The chief alternative was a heuristic of adding -D_USE_32BIT_TIME_T when
  $Config{gccversion} is nonempty.  That banks on every gcc-built Perl using
  the same ABI.  gcc could change its default ABI the way MSVC once did, and
  one could build Perl with gcc and the non-default ABI.

  The GNU make build system could benefit from a similar test, without which
  it does not support MSVC-built Perl.  For now, just add a comment.  Most
  users taking the special step of building Perl with MSVC probably build
  PostgreSQL with MSVC.

  Discussion: https://postgr.es/m/20171130041441.GA3161526@rfd.leadboat.com

* Prohibit identity columns on typed tables and partitions (Peter Eisentraut, 2017-12-08)

  Those cases currently crash and supporting them is more work than originally
  thought, so we'll just prohibit these scenarios for now.

  Author: Michael Paquier <michael.paquier@gmail.com>
  Reviewed-by: Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>
  Reported-by: Мансур Галиев <gomer94@yandex.ru>
  Bug: #14866

* Fix mistake in comment (Peter Eisentraut, 2017-12-08)

  Reported-by: Masahiko Sawada <sawada.mshk@gmail.com>

* In plpgsql, unify duplicate variables for record and row cases. (Tom Lane, 2017-12-08)

  plpgsql's function exec_move_row() handles assignment of a composite source
  value to either a PLpgSQL_rec or PLpgSQL_row target variable.  Oddly, rather
  than taking a single target argument which it could do run-time type
  detection on, it was coded to take two separate arguments (only one of which
  is allowed to be non-NULL).  This choice had then back-propagated into
  storing two separate target variables in various plpgsql statement nodes,
  with lots of duplicative coding and awkward interface logic to support that.
  Simplify matters by folding those pairs down to single variables,
  distinguishing the two cases only where we must ... which turns out to be
  only in exec_move_row itself.

  This is purely refactoring and should not change any behavior.

  In passing, remove unused field PLpgSQL_stmt_open.returntype.

  Discussion: https://postgr.es/m/11787.1512713374@sss.pgh.pa.us

* Apply identity sequence values on COPY (Peter Eisentraut, 2017-12-08)

  A COPY into a table should apply identity sequence values just like it does
  for ordinary defaults.  This was previously forgotten, leading to null
  values being inserted, which in turn would fail because identity columns
  have not-null constraints.

  Author: Michael Paquier <michael.paquier@gmail.com>
  Reported-by: Steven Winfield <steven.winfield@cantabcapital.com>
  Bug: #14952

* Speed up isolation test for concurrent VACUUM/ANALYZE behavior. (Robert Haas, 2017-12-07)

  Per Tom Lane, the old test sometimes times out with CLOBBER_CACHE_ALWAYS.

  Nathan Bossart

  Discussion: http://postgr.es/m/28614.1512583046@sss.pgh.pa.us

* Report failure to start a background worker. (Robert Haas, 2017-12-06)

  When a worker is flagged as BGW_NEVER_RESTART and we fail to start it, or if
  it is not marked BGW_NEVER_RESTART but is terminated before startup
  succeeds, what BgwHandleStatus should be reported?  The previous code really
  hadn't considered this possibility (as indicated by the comments which
  ignore it completely) and would typically return BGWH_NOT_YET_STARTED, but
  that's not a good answer, because then there's no way for code using
  GetBackgroundWorkerPid() to tell the difference between a worker that has
  not started but will start later and a worker that has not started and will
  never be started.  So, when this case happens, return BGWH_STOPPED instead.
  Update the comments to reflect this.

  The preceding fix by itself is insufficient to fix the problem, because the
  old code also didn't send a notification to the process identified in
  bgw_notify_pid when startup failed.  That might've been technically correct
  under the theory that the status of the worker was BGWH_NOT_YET_STARTED,
  because the status would indeed not change when the worker failed to start,
  but now that we're more usefully reporting BGWH_STOPPED, a notification is
  needed.

  Without these fixes, code which starts background workers and then uses the
  recommended APIs to wait for those background workers to start would hang
  indefinitely if the postmaster failed to fork a worker.

  Amit Kapila and Robert Haas

  Discussion: http://postgr.es/m/CAA4eK1KDfKkvrjxsKJi3WPyceVi3dH1VCkbTJji2fuwKuB=3uw@mail.gmail.com

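  A hedged sketch of the caller pattern affected by this fix; the extension
  library name and worker entry point are placeholders:

      #include "postgres.h"
      #include "miscadmin.h"
      #include "postmaster/bgworker.h"

      static void
      start_example_worker(void)
      {
          BackgroundWorker        worker;
          BackgroundWorkerHandle *handle;
          BgwHandleStatus         status;
          pid_t                   pid;

          memset(&worker, 0, sizeof(worker));
          worker.bgw_flags = BGWORKER_SHMEM_ACCESS;
          worker.bgw_start_time = BgWorkerStart_ConsistentState;
          worker.bgw_restart_time = BGW_NEVER_RESTART;
          worker.bgw_notify_pid = MyProcPid;
          snprintf(worker.bgw_name, BGW_MAXLEN, "example worker");
          snprintf(worker.bgw_library_name, BGW_MAXLEN, "my_extension");      /* placeholder */
          snprintf(worker.bgw_function_name, BGW_MAXLEN, "my_worker_main");   /* placeholder */

          if (!RegisterDynamicBackgroundWorker(&worker, &handle))
              elog(ERROR, "could not register background worker");

          /*
           * With this commit, a postmaster fork failure is reported as
           * BGWH_STOPPED (and bgw_notify_pid is signalled), so this wait no
           * longer hangs indefinitely.
           */
          status = WaitForBackgroundWorkerStartup(handle, &pid);
          if (status != BGWH_STARTED)
              elog(ERROR, "background worker failed to start");
      }
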