path: root/src
Commit message / Author / Date
...
* Fix plpgsql to enforce domain checks when returning a NULL domain value. (Tom Lane, 2018-02-15)
If a plpgsql function is declared to return a domain type, and the domain's constraints forbid a null value, it was nonetheless possible to return NULL, because we didn't bother to check the constraints for a null result. I'd noticed this while fooling with domains-over-composite, but had not gotten around to fixing it immediately.

Add a regression test script exercising this and various other domain cases, largely borrowed from the plpython_types test.

Although this is clearly a bug fix, I'm not sure whether anyone would thank us for changing the behavior in stable branches, so I'm inclined not to back-patch.

* Cast to void in StaticAssertExpr, not its callers. (Tom Lane, 2018-02-15)
Seems a bit silly that many (in fact all, as of today) uses of StaticAssertExpr would need to cast it to void to avoid warnings from pickier compilers. Let's just do the cast right in the macro, instead.

In passing, change StaticAssertExpr to StaticAssertStmt in one place where that seems more apropos.

Discussion: https://postgr.es/m/16161.1518715186@sss.pgh.pa.us

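For illustration, a hedged sketch of the technique this refers to; the macro and helper names here are hypothetical and the real definition lives in c.h (modern builds use C11 static assertions where available):

    /*
     * Hypothetical sketch: a compile-time assertion usable inside an
     * expression.  The negative bit-field width forces a compile failure
     * when cond is false, and the (void) cast keeps "unused value"
     * warnings away from callers, so they need no cast of their own.
     */
    #define MY_STATIC_ASSERT_EXPR(cond) \
        ((void) sizeof(struct { int assert_failed : (cond) ? 1 : -1; }))

    /* Example use in expression context: */
    #define FOUR_BYTE_FETCH(p) \
        (MY_STATIC_ASSERT_EXPR(sizeof(int) == 4), *(const int *) (p))
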
* Move the extern declaration for ExceptionalCondition into c.h. (Tom Lane, 2018-02-14)
This is the logical conclusion of our decision to support Assert() in both frontend and backend code: it should be possible to use that after including just c.h. But as things were arranged before, if you wanted to use Assert() in code that might be compiled for either environment, you had to include postgres.h for the backend case. Let's simplify that.

Per buildfarm, some of whose members started throwing warnings after commit 0c62356cc added an Assert in src/port/snprintf.c.

It's possible that some other src/port files that use the stanza

    #ifndef FRONTEND
    #include "postgres.h"
    #else
    #include "postgres_fe.h"
    #endif

could now be simplified to just say '#include "c.h"'. I have not tested for that, though, and it'd be unlikely to apply for more than a small number of them.

* Revert "Stabilize output of new regression test case".Tom Lane2018-02-14
| | | | | | | | | This effectively reverts commit 9edc97b71 (although the test is now in a different place and has different contents). We don't need that hack anymore, because since commit 4b93f5799, this test case no longer throws an error and so there's no displayed CONTEXT that could vary depending on CLOBBER_CACHE_ALWAYS. The underlying unstable-output problem isn't really gone, of course, but it no longer manifests here.
* Stabilize new plpgsql_record regression tests. (Tom Lane, 2018-02-14)
The buildfarm's CLOBBER_CACHE_ALWAYS animals aren't happy with some of the test cases added in commit 4b93f5799. There are two different problems:

* In two places, a different CONTEXT stack is shown because the error is detected in a different place, due to recompiling an expression from scratch rather than re-using a previously cached plan for it. I fixed these via the expedient of hiding the CONTEXT stack altogether.

* In one place, a test expected to fail (because a cached plan hadn't been updated) actually succeeds (because the forced recompile makes it good). I couldn't think of a simple workaround for this, so I've just commented out that test step altogether.

I have hopes of improving things enough that both of these kluges can be reverted eventually. The first one is the same kind of problem previously discussed at https://postgr.es/m/31545.1512924904@sss.pgh.pa.us but there was insufficient agreement about how to fix it, so we just hacked around the output instability (commit 9edc97b71). The second issue should be fixed by allowing the plan to be rebuilt when a type conflict is detected. But for today, let's just make the buildfarm green again.

* Return implementation-defined value if pg_$op_s$bit_overflow overflows. (Andres Freund, 2018-02-14)
Some older compilers otherwise sometimes complain about undefined values, even though the return value should not be used in the overflow case. We assume that any decent compiler will optimize away the unnecessary assignment in performance-critical cases.

We do not want to constrain the returned value to a specific value, e.g. 0 or the wrapped-around value, because some fast ways to implement overflow-detecting math do not easily allow for that (e.g. msvc intrinsics). As the function documentation already documents the returned value in case of intrinsics to be implementation defined, no documentation has to be updated.

Per complaint from Tom Lane and his buildfarm member prairiedog.

Author: Andres Freund
Discussion: https://postgr.es/m/18169.1513958454@sss.pgh.pa.us

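As a hedged illustration of the behavior being described (not the actual code in src/include/common/int.h; the function name and fallback branch are assumptions), an overflow-detecting add typically looks like this:

    #include <stdbool.h>
    #include <stdint.h>

    /*
     * Hypothetical sketch: returns true on overflow.  On overflow the value
     * stored in *result is implementation defined, which lets the
     * builtin/intrinsic path stay cheap.
     */
    static inline bool
    my_add_s32_overflow(int32_t a, int32_t b, int32_t *result)
    {
    #if defined(__GNUC__) || defined(__clang__)
        return __builtin_add_overflow(a, b, result);
    #else
        int64_t sum = (int64_t) a + (int64_t) b;

        if (sum < INT32_MIN || sum > INT32_MAX)
        {
            *result = 0;            /* arbitrary; callers must not use it */
            return true;
        }
        *result = (int32_t) sum;
        return false;
    #endif
    }
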
* Silence assorted "variable may be used uninitialized" warnings. (Tom Lane, 2018-02-14)
All of these are false positives, but in each case a fair amount of analysis is needed to see that, and it's not too surprising that not all compilers are smart enough. (In particular, in the logtape.c case, a compiler lacking the knowledge provided by the Assert would almost surely complain, so that this warning will be seen in any non-assert build.)

Some of these are of long standing while others are pretty recent, but it only seems worth fixing them in HEAD.

Jaime Casanova, tweaked a bit by me

Discussion: https://postgr.es/m/CAJGNTeMcYAMJdPAom52dppLMtF-UnEZi0dooj==75OEv1EoBZA@mail.gmail.com

* Add an assertion that we don't pass NULL to snprintf("%s"). (Tom Lane, 2018-02-14)
Per commit e748e902d, we appear to have little or no coverage in the buildfarm of machines that will dump core when asked to printf a null string pointer. Let's try to improve that situation by adding an assertion that will make src/port/snprintf.c behave that way. Since it's just an assertion, it won't break anything in production builds, but it will help developers find this type of oversight.

Note that while our buildfarm coverage of machines that use that snprintf implementation is pretty thin on the Unix side (apparently amounting only to gaur/pademelon), all of the MSVC critters use it.

Discussion: https://postgr.es/m/156b989dbc6fe7c4d3223cf51da61195@postgrespro.ru

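A hedged sketch of the kind of check being added; the helper name here is hypothetical, and the real change uses PostgreSQL's Assert() inside the %s handling of src/port/snprintf.c:

    #include <assert.h>
    #include <string.h>

    /*
     * Hypothetical sketch: refuse a NULL argument for %s in assert-enabled
     * builds instead of relying on the platform printf to tolerate it.
     */
    static void
    append_string_arg(char *dst, size_t dstlen, const char *value)
    {
        assert(value != NULL);      /* catches %s given a NULL pointer */
        strncpy(dst, value, dstlen - 1);
        dst[dstlen - 1] = '\0';
    }
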
* Fix broken logic for reporting PL/Python function names in errcontext. (Tom Lane, 2018-02-14)
plpython_error_callback() reported the name of the function associated with the topmost PL/Python execution context. This was not merely wrong if there were nested PL/Python contexts, but it risked a core dump if the topmost one is an inline code block rather than a named function. That will have proname = NULL, and so we were passing a NULL pointer to snprintf("%s"). It seems that none of the PL/Python-testing machines in the buildfarm will dump core for that, but some platforms do, as reported by Marina Polyakova.

Investigation finds that there actually is an existing regression test that used to prove that the behavior was wrong, though apparently no one had noticed that it was printing the wrong function name. It stopped showing the problem in 9.6 when we adjusted psql to not print CONTEXT by default for NOTICE messages. The problem is masked (if your platform avoids the core dump) in error cases, because PL/Python will throw away the originally generated error info in favor of a new traceback produced at the outer level.

Repair by using ErrorContextCallback.arg to pass the correct context to the error callback. Add a regression test illustrating correct behavior.

Back-patch to all supported branches, since they're all broken this way.

Discussion: https://postgr.es/m/156b989dbc6fe7c4d3223cf51da61195@postgrespro.ru

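The ErrorContextCallback mechanism referred to works roughly as sketched below; the struct fields and error_context_stack are the real API, while the callback body and context type are hypothetical:

    #include "postgres.h"

    /* Hypothetical per-call state the error callback needs to see. */
    typedef struct MyExecContext
    {
        const char *func_name;      /* NULL for an inline code block */
    } MyExecContext;

    static void
    my_error_callback(void *arg)
    {
        MyExecContext *ctx = (MyExecContext *) arg;

        if (ctx->func_name != NULL)
            errcontext("PL function \"%s\"", ctx->func_name);
        else
            errcontext("PL anonymous code block");
    }

    static void
    my_exec_something(MyExecContext *ctx)
    {
        ErrorContextCallback errcallback;

        /* Push our callback, passing *this* context via the arg field. */
        errcallback.callback = my_error_callback;
        errcallback.arg = ctx;
        errcallback.previous = error_context_stack;
        error_context_stack = &errcallback;

        /* ... work that may ereport() happens here ... */

        /* Pop the callback again. */
        error_context_stack = errcallback.previous;
    }
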
* Support CONSTANT/NOT NULL/initial value for plpgsql composite variables. (Tom Lane, 2018-02-13)
These features were never implemented previously for composite or record variables ... not that the documentation admitted it, so there are no doc updates here.

This also fixes some issues concerning enforcing DOMAIN NOT NULL constraints against plpgsql variables, although I'm not sure that that topic is completely dealt with.

I created a new plpgsql test file for these features, and moved the one relevant existing test case into that file.

Tom Lane, reviewed by Daniel Gustafsson

Discussion: https://postgr.es/m/18362.1514605650@sss.pgh.pa.us

* Speed up plpgsql trigger startup by introducing "promises". (Tom Lane, 2018-02-13)
Over the years we've accreted quite a few special variables that are predefined in plpgsql trigger functions. The cost of initializing these variables to their defined values turns out to be a significant part of the runtime of simple triggers; but, undoubtedly, most real-world triggers never examine the values of most of these variables.

To improve matters, invent the notion of a variable that has a "promise" attached to it, specifying which of the predetermined values should be assigned to the variable if anything ever reads it. This eliminates all the unneeded startup overhead, in return for a small penalty on accesses to these variables.

Tom Lane, reviewed by Pavel Stehule

Discussion: https://postgr.es/m/11986.1514407114@sss.pgh.pa.us

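A hedged, self-contained sketch of the lazy-initialization idea described above; the enum, struct, and function names are all hypothetical, not the actual plpgsql data structures:

    /*
     * Hypothetical sketch: remember which predefined value a variable should
     * receive, and compute it only if something actually reads it.
     */
    typedef enum MyPromiseKind
    {
        MY_PROMISE_NONE = 0,        /* nothing pending */
        MY_PROMISE_TG_NAME,         /* fill with the trigger's name */
        MY_PROMISE_TG_WHEN          /* fill with "BEFORE" or "AFTER" */
    } MyPromiseKind;

    typedef struct MyVariable
    {
        MyPromiseKind promise;      /* pending value, if any */
        const char *value;          /* valid once the promise is fulfilled */
    } MyVariable;

    /* Called on every read access; cheap when no promise is pending. */
    static const char *
    my_var_read(MyVariable *var, const char *trigger_name, int fired_before)
    {
        if (var->promise == MY_PROMISE_TG_NAME)
            var->value = trigger_name;
        else if (var->promise == MY_PROMISE_TG_WHEN)
            var->value = fired_before ? "BEFORE" : "AFTER";
        var->promise = MY_PROMISE_NONE;
        return var->value;
    }
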
* Speed up plpgsql function startup by doing fewer pallocs. (Tom Lane, 2018-02-13)
Previously, copy_plpgsql_datum did a separate palloc for each variable needing instance-local storage. In simple benchmarks this made for a noticeable fraction of the total runtime. Improve it by precalculating the space needed for all of a function's variables and doing just one palloc for all of them.

In passing, remove PLPGSQL_DTYPE_EXPR from the list of plpgsql "datum" types, since in fact it has nothing in common with the others, and there is no place that needs to discriminate on the basis of dtype between an expression and any type of datum. And add comments clarifying which datum struct fields are generic and which aren't.

Tom Lane, reviewed by Pavel Stehule

Discussion: https://postgr.es/m/11986.1514407114@sss.pgh.pa.us

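The allocation change follows the usual "size it all, allocate once, carve it up" pattern; a hedged, generic sketch (the real code uses palloc and plpgsql's datum structs, not the hypothetical helper shown here):

    #include <stdlib.h>
    #include <string.h>

    /* Round a size up to a 16-byte boundary so each slice stays aligned. */
    #define ALIGN16(sz) (((sz) + 15) & ~(size_t) 15)

    /*
     * Hypothetical sketch of the one-allocation pattern: size everything
     * first, allocate once, then carve the block into per-variable slices.
     */
    static void **
    instantiate_vars(const size_t *var_sizes, int nvars)
    {
        size_t  total = ALIGN16((size_t) nvars * sizeof(void *));
        void  **slots;
        char   *cursor;
        int     i;

        for (i = 0; i < nvars; i++)
            total += ALIGN16(var_sizes[i]);

        slots = malloc(total);          /* the only allocation */
        if (slots == NULL)
            return NULL;

        cursor = (char *) slots + ALIGN16((size_t) nvars * sizeof(void *));
        for (i = 0; i < nvars; i++)
        {
            slots[i] = cursor;
            memset(cursor, 0, var_sizes[i]);
            cursor += ALIGN16(var_sizes[i]);
        }
        return slots;
    }
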
* Make plpgsql use its DTYPE_REC code paths for composite-type variables. (Tom Lane, 2018-02-13)
Formerly, DTYPE_REC was used only for variables declared as "record"; variables of named composite types used DTYPE_ROW, which is faster for some purposes but much less flexible. In particular, the ROW code paths are entirely incapable of dealing with DDL-caused changes to the number or data types of the columns of a row variable, once a particular plpgsql function has been parsed for the first time in a session. And, since the stored representation of a ROW isn't a tuple, there wasn't any easy way to deal with variables of domain-over-composite types, since the domain constraint checking code would expect the value to be checked to be a tuple. A lesser, but still real, annoyance is that ROW format cannot represent a true NULL composite value, only a row of per-field NULL values, which is not exactly the same thing.

Hence, switch to using DTYPE_REC for all composite-typed variables, whether "record", named composite type, or domain over named composite type. DTYPE_ROW remains but is used only for its native purpose, to represent a fixed-at-compile-time list of variables, for instance the targets of an INTO clause.

To accomplish this without taking significant performance losses, introduce infrastructure that allows storing composite-type variables as "expanded objects", similar to the "expanded array" infrastructure introduced in commit 1dc5ebc90. A composite variable's value is thereby kept (most of the time) in the form of separate Datums, so that field accesses and updates are not much more expensive than they were in the ROW format. This holds the line, more or less, on performance of variables of named composite types in field-access-intensive microbenchmarks, and makes variables declared "record" perform much better than before in similar tests.

In addition, the logic involved with enforcing composite-domain constraints against updates of individual fields is in the expanded record infrastructure, not plpgsql proper, so that it might be reusable for other purposes.

In further support of this, introduce a typcache feature for assigning a unique-within-process identifier to each distinct tuple descriptor of interest; in particular, DDL alterations on composite types result in a new identifier for that type. This allows very cheap detection of the need to refresh tupdesc-dependent data. This improves on the "tupDescSeqNo" idea I had in commit 687f096ea: that assigned identifying sequence numbers to successive versions of individual composite types, but the numbers were not unique across different types, nor was there support for assigning numbers to registered record types.

In passing, allow plpgsql functions to accept as well as return type "record". There was no good reason for the old restriction, and it was out of step with most of the other PLs.

Tom Lane, reviewed by Pavel Stehule

Discussion: https://postgr.es/m/8962.1514399547@sss.pgh.pa.us

* Add procedure support to pg_get_functiondef (Peter Eisentraut, 2018-02-13)
This also makes procedures work in psql's \ef and \sf commands.

Reported-by: Pavel Stehule <pavel.stehule@gmail.com>

* Add tests for pg_get_functiondef (Peter Eisentraut, 2018-02-13)

* Fix typo (Peter Eisentraut, 2018-02-13)

* In LDAP test, restart after pg_hba.conf changes (Peter Eisentraut, 2018-02-13)
Instead of issuing a reload after pg_hba.conf changes between test cases, run a full restart. With a reload, an error in the new pg_hba.conf is ignored and the tests will continue to run with the old settings, invalidating the subsequent test cases. With a restart, a faulty pg_hba.conf will lead to the test being aborted, which is what we'd rather want.

* Fix typo (Peter Eisentraut, 2018-02-12)
Author: Masahiko Sawada <sawada.mshk@gmail.com>

* get_relid_attribute_name is dead, long live get_attname (Alvaro Herrera, 2018-02-12)
The modern way is to use a missing_ok argument instead of two separate almost-identical routines, so do that.

Author: Michaël Paquier
Reviewed-by: Álvaro Herrera
Discussion: https://postgr.es/m/20180201063212.GE6398@paquier.xyz

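A hedged usage example of the consolidated API; get_attname taking a missing_ok argument is what the commit describes, while the wrapper around it is hypothetical:

    #include "postgres.h"
    #include "access/attnum.h"
    #include "utils/lsyscache.h"

    /*
     * Hypothetical wrapper: with missing_ok = true, get_attname returns NULL
     * for a nonexistent column instead of raising an error.
     */
    static char *
    my_get_column_name(Oid relid, AttrNumber attnum)
    {
        char *attname = get_attname(relid, attnum, true);

        if (attname == NULL)
            elog(DEBUG1, "column %d of relation %u does not exist",
                 attnum, relid);
        return attname;
    }
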
* Fix parallel index builds for dynamic_shared_memory_type=none. (Robert Haas, 2018-02-12)
The previous code failed to realize that this setting effectively disables parallelism, and would crash if it decided to attempt parallelism anyway. Instead, treat it as a disabling condition.

Kyotaro Horiguchi, who also reported the issue. Reviewed by Michael Paquier and Peter Geoghegan.

Discussion: http://postgr.es/m/20180209.170635.256350357.horiguchi.kyotaro@lab.ntt.co.jp

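A hedged sketch of the sort of guard described; dynamic_shared_memory_type and DSM_IMPL_NONE are assumed to be the relevant GUC and constant in this era's headers, and the surrounding function is hypothetical:

    #include "postgres.h"
    #include "storage/dsm_impl.h"

    /*
     * Hypothetical sketch: when no dynamic shared memory implementation is
     * configured, a parallel index build simply gets zero workers.
     */
    static int
    my_plan_index_workers(int requested_workers)
    {
        if (dynamic_shared_memory_type == DSM_IMPL_NONE)
            return 0;               /* parallelism effectively disabled */

        return requested_workers;
    }
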
* psql: give ^D hint for \q in place where \q is ignored (Bruce Momjian, 2018-02-12)
Also add comment on why exit/quit are not documented.

Discussion: https://postgr.es/m/20180202053928.GA13472@momjian.us

* Fix assorted errors in pg_dump's handling of extended statistics objects. (Tom Lane, 2018-02-11)
pg_dump supposed that a stats object necessarily shares the same schema as its underlying table, and that it doesn't have a separate owner. These things may have been true during early development of the feature, but they are not true as of v10 release.

Failure to track the object's schema separately turns out to have only limited consequences, because pg_get_statisticsobjdef() always schema-qualifies the target object name in the generated CREATE STATISTICS command (a decision out of step with the rest of ruleutils.c, but I digress). Therefore the restored object would be in the right schema, so that the only problem is that the TOC entry would be mislabeled as to schema. That could lead to wrong decisions for schema-selective restores, for example.

The ownership issue is a bit more serious: not only was the TOC entry potentially mislabeled as to owner, but pg_dump didn't bother to issue an ALTER OWNER command at all, so that after restore the stats object would continue to be owned by the restoring superuser.

A final point is that decisions as to whether to dump a stats object or not were driven by whether the underlying table was dumped or not. While that's not wrong on its face, it won't scale nicely to the planned future extension to cross-table statistics. Moreover, that design decision comes out of the view of stats objects as being auxiliary to a particular table, like a rule or trigger, which is exactly where the above problems came from. Since we're now treating stats objects more like independent objects in their own right, they ought to behave like standalone objects for this purpose too. So change to using the generic selectDumpableObject() logic for them (which presently amounts to "dump if containing schema is to be dumped").

Along the way to fixing this, restructure so that getExtendedStatistics collects the identity info (only) for all extended stats objects in one query, and then for each object actually being dumped, we retrieve the definition in dumpStatisticsExt. This is necessary to ensure that schema-qualification in the generated CREATE STATISTICS command happens with respect to the search path that pg_dump will now be using at restore time (ie, the schema the stats object is in, not that of the underlying table). It's probably also significantly faster in the typical scenario where only a minority of tables have extended stats.

Back-patch to v10 where extended stats were introduced.

Discussion: https://postgr.es/m/18272.1518328606@sss.pgh.pa.us

* Avoid premature free of pass-by-reference CALL arguments. (Tom Lane, 2018-02-10)
Prematurely freeing the EState used to evaluate CALL arguments led, in some cases, to passing dangling pointers to the procedure. This was masked in trivial cases because the argument pointers would point to Const nodes in the original expression tree, and in some other cases because the result value would end up in the standalone ExprContext rather than in memory belonging to the EState --- but that wasn't exactly high quality programming either, because the standalone ExprContext was never explicitly freed, breaking assorted API contracts.

In addition, using a separate EState for each argument was just silly. So let's use just one EState, and one ExprContext, and make the latter belong to the former rather than be standalone, and clean up the EState (and hence the ExprContext) post-call.

While at it, improve the function's commentary a bit.

Discussion: https://postgr.es/m/29173.1518282748@sss.pgh.pa.us

* Fix oversight in CALL argument handling, and do some minor cleanup. (Tom Lane, 2018-02-10)
CALL statements cannot support sub-SELECTs in the arguments of the called procedure, since they just use ExecEvalExpr to evaluate such arguments. Teach transformSubLink() to reject the case, as it already does for other contexts in which subqueries are not supported.

In passing, s/EXPR_KIND_CALL/EXPR_KIND_CALL_ARGUMENT/ to make that enum symbol line up more closely with the phrasing of the error messages it is associated with. And fix someone's weak grasp of English grammar in the preceding EXPR_KIND_PARTITION_EXPRESSION addition.

Also update an incorrect comment in resolve_unique_index_expr (possibly it was correct when written, but nowadays transformExpr definitely does reject SRFs here).

Per report from Pavel Stehule --- but this resolves only one of the bugs he mentions.

Discussion: https://postgr.es/m/CAFj8pRDxOwPPzpA8i+AQeDQFj7bhVw-dR2==rfWZ3zMGkm568Q@mail.gmail.com

* Mark assorted GUC variables as PGDLLIMPORT. (Robert Haas, 2018-02-09)
This makes life easier for extension authors.

Metin Doslu

Discussion: http://postgr.es/m/CAL1dPcfa45o1dC-c4t-48v0OZE6oy4ChJhObrtkK8mzNfXqDTA@mail.gmail.com

* Clear stmt_timeout_active if we disable_all_timeouts. (Robert Haas, 2018-02-09)
Otherwise, we can end up with the flag set when the timeout is actually disabled, leading to misbehavior. Commit f8e5f156b30efee5d0038b03e38735773abcb7ed introduced this bug.

Reported by Peter Eisentraut. Analysis and fix by Thomas Munro, tweaked by me.

Discussion: http://postgr.es/m/6a909374-2602-7136-8c70-397330a418f3@2ndquadrant.com

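A hedged sketch of the bookkeeping issue; disable_all_timeouts() is the real call and stmt_timeout_active the backend-local flag, but the wrapper shown here is hypothetical:

    #include "postgres.h"
    #include "utils/timeout.h"

    static bool stmt_timeout_active = false;   /* is STATEMENT_TIMEOUT armed? */

    /*
     * Hypothetical helper: whenever every timeout is turned off, the flag
     * must be cleared too, or later code will wrongly believe the statement
     * timeout is still running.
     */
    static void
    my_disable_all_timeouts(void)
    {
        disable_all_timeouts(false);
        stmt_timeout_active = false;
    }
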
* Fix incorrect method name in comment. (Robert Haas, 2018-02-08)
Atsushi Torikoshi

Discussion: http://postgr.es/m/1b056262-4bc0-a982-c899-bb67a0a7fd52@lab.ntt.co.jp

* Avoid listing the same ResultRelInfo in more than one EState list. (Robert Haas, 2018-02-08)
Doing so causes EXPLAIN ANALYZE to show trigger statistics multiple times. Commit 2f178441044be430f6b4d626e4dae68a9a6f6cec seems to be to blame for this.

Amit Langote, reviewed by Amit Khandekar, Etsuro Fujita, and me.

* Fix possible infinite loop with Parallel Append. (Robert Haas, 2018-02-08)
When the previously-chosen plan was non-partial, all pa_finished flags for partial plans are now set, and pa_next_plan has not yet been set to INVALID_SUBPLAN_INDEX, the previous code could go into an infinite loop.

Report by Rajkumar Raghuwanshi. Patch by Amit Khandekar and me. Review by Kyotaro Horiguchi.

Discussion: http://postgr.es/m/CAJ3gD9cf43z78qY=U=H0HvOEN341qfRO-vLpnKPSviHeWgJQ5w@mail.gmail.com

* Refine SSL tests test name reporting (Peter Eisentraut, 2018-02-08)
Instead of using the psql/libpq connection string as the displayed test name and relying on "notes" and source code comments to explain the tests, give the tests self-explanatory names, like we do elsewhere.

Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>

* Make new triggers tests more robust (Peter Eisentraut, 2018-02-07)
Add explicit collation on the trigger name to avoid locale dependencies. Also restrict the tables selected, to avoid interference from concurrently running tests.

* Add more information_schema columns (Peter Eisentraut, 2018-02-07)
- table_constraints.enforced
- triggers.action_order
- triggers.action_reference_old_table
- triggers.action_reference_new_table

Reviewed-by: Michael Paquier <michael.paquier@gmail.com>

* Update out-of-date comment in StartupXLOG. (Robert Haas, 2018-02-07)
Commit 4b0d28de06b28e57c540fca458e4853854fbeaf8 should have updated this comment, but did not.

Thomas Munro

Discussion: http://postgr.es/m/CAEepm=0iJ8aqQcF9ij2KerAkuHF3SwrVTzjMdm1H4w++nfBf9A@mail.gmail.com

* Remove prototype for fmgr() function, which no longer exists. (Robert Haas, 2018-02-07)
Commit 5ded4bd21403e143dd3eb66b92d52732fdac1945 removed the code for this function, but neglected to remove the prototype and associated comments.

Dagfinn Ilmari Mannsåker

Discussion: http://postgr.es/m/d8j4lmuxjzk.fsf@dalvik.ping.uio.no

* Support all SQL:2011 options for window frame clauses. (Tom Lane, 2018-02-07)
This patch adds the ability to use "RANGE offset PRECEDING/FOLLOWING" frame boundaries in window functions. We'd punted on that back in the original patch to add window functions, because it was not clear how to do it in a reasonably data-type-extensible fashion. That problem is resolved here by adding the ability for btree operator classes to provide an "in_range" support function that defines how to add or subtract the RANGE offset value. Factoring it this way also allows the operator class to avoid overflow problems near the ends of the datatype's range, if it wishes to expend effort on that. (In the committed patch, the integer opclasses handle that issue, but it did not seem worth the trouble to avoid overflow failures for datetime types.)

The patch includes in_range support for the integer_ops opfamily (int2/int4/int8) as well as the standard datetime types. Support for other numeric types has been requested, but that seems like suitable material for a follow-on patch.

In addition, the patch adds GROUPS mode which counts the offset in ORDER-BY peer groups rather than rows, and it adds the frame_exclusion options specified by SQL:2011. As far as I can see, we are now fully up to spec on window framing options.

Existing behaviors remain unchanged, except that I changed the errcode for a couple of existing error reports to meet the SQL spec's expectation that negative "offset" values should be reported as SQLSTATE 22013.

Internally and in relevant parts of the documentation, we now consistently use the terminology "offset PRECEDING/FOLLOWING" rather than "value PRECEDING/FOLLOWING", since the term "value" is confusingly vague.

Oliver Ford, reviewed and whacked around some by me

Discussion: https://postgr.es/m/CAGMVOdu9sivPAxbNN0X+q19Sfv9edEPv=HibOJhB14TJv_RCQg@mail.gmail.com

* Fix incorrect grammar. (Robert Haas, 2018-02-06)
Etsuro Fujita

Discussion: http://postgr.es/m/5A7981EA.8020201@lab.ntt.co.jp

* Avoid valgrind complaint about write() of uninitialized bytes. (Robert Haas, 2018-02-06)
LogicalTapeFreeze() may write out its first block when it is dirty but not full, and then immediately read the first block back in from its BufFile as a BLCKSZ-width block. This can only occur in rare cases where very few tuples were written out, which is currently only possible with parallel external tuplesorts. To avoid valgrind complaints, tell it to treat the tail of logtape.c's buffer as defined.

Commit 9da0cc35284bdbe8d442d732963303ff0e0a40bc exposed this problem but did not create it. LogicalTapeFreeze() has always tended to write out some amount of garbage bytes, but previously never wrote less than one block of data in total, so the problem was masked.

Per buildfarm members lousyjack and skink.

Peter Geoghegan, based on a suggestion from Tom Lane and me. Some comment revisions by me.

* Doc: move info for btree opclass implementors into main documentation. (Tom Lane, 2018-02-06)
Up to now, useful info for writing a new btree opclass has been buried in the backend's nbtree/README file. Let's move it into the SGML docs, in preparation for extending it with info about "in_range" functions in the upcoming window RANGE patch.

To do this, I chose to create a new chapter for btree indexes in Part VII (Internals), parallel to the chapters that exist for the newer index AMs. This is a pretty short chapter as-is. At some point somebody might care to flesh it out with more detail about btree internals, but that is beyond the scope of my ambition for today.

Discussion: https://postgr.es/m/23141.1517874668@sss.pgh.pa.us

* Fix possible crash in partition-wise join. (Robert Haas, 2018-02-05)
The previous code assumed that we'd always succeed in creating child-joins for a joinrel for which partition-wise join was considered, but that's not guaranteed, at least in the case where dummy rels are involved.

Ashutosh Bapat, with some wordsmithing by me.

Discussion: http://postgr.es/m/CAFjFpRf8=uyMYYfeTBjWDMs1tR5t--FgOe2vKZPULxxdYQ4RNw@mail.gmail.com

* Ensure that all temp files made during pg_upgrade are non-world-readable. (Tom Lane, 2018-02-05)
pg_upgrade has always attempted to ensure that the transient dump files it creates are inaccessible except to the owner. However, refactoring in commit 76a7650c4 broke that for the file containing "pg_dumpall -g" output; since then, that file was protected according to the process's default umask. Since that file may contain role passwords (hopefully encrypted, but passwords nonetheless), this is a particularly unfortunate oversight. Prudent users of pg_upgrade on multiuser systems would probably run it under a umask tight enough that the issue is moot, but perhaps some users are depending only on pg_upgrade's umask changes to protect their data.

To fix this in a future-proof way, let's just tighten the umask at process start. There are no files pg_upgrade needs to write at a weaker security level; and if there were, transiently relaxing the umask around where they're created would be a safer approach.

Report and patch by Tom Lane; the idea for the fix is due to Noah Misch. Back-patch to all supported branches.

Security: CVE-2018-1053

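A hedged sketch of what "tighten the umask at process start" looks like in C; the exact mask pg_upgrade settles on may differ:

    #include <sys/stat.h>
    #include <sys/types.h>

    int
    main(int argc, char **argv)
    {
        /*
         * Strip group and other permission bits from every file this process
         * creates, regardless of the umask inherited from the caller.
         */
        umask(S_IRWXG | S_IRWXO);

        /* ... the rest of the program creates its temp files as usual ... */
        return 0;
    }
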
* Fix RelationBuildPartitionKey's processing of partition key expressions. (Tom Lane, 2018-02-05)
Failure to advance the list pointer while reading partition expressions from a list results in invoking an input function with inappropriate data, possibly leading to crashes or, with carefully crafted input, disclosure of arbitrary backend memory.

Bug discovered independently by Álvaro Herrera and David Rowley. This patch is by Álvaro but owes something to David's proposed fix. Back-patch to v10 where the issue was introduced.

Security: CVE-2018-1052

* Skip setting up shared instrumentation for Hash node if not needed. (Tom Lane, 2018-02-04)
We don't need to set up the shared space for hash join instrumentation data if instrumentation hasn't been requested. Let's follow the example of the similar Sort node code and save a few cycles by skipping that when we can.

This reverts commit d59ff4ab3 and instead allows us to use the safer choice of passing noError = false to shm_toc_lookup in ExecHashInitializeWorker, since if we reach that call there should be a TOC entry to be found.

Thomas Munro

Discussion: https://postgr.es/m/E1ehkoZ-0005uW-43%40gemulon.postgresql.org

* Fix another instance of unsafe coding for shm_toc_lookup failure. (Tom Lane, 2018-02-02)
One or another author of commit 5bcf389ec seems to have thought that computing an offset from a NULL pointer would yield another NULL pointer. There may possibly be architectures where that works, but common machines don't work like that. Per a quick code review of places calling shm_toc_lookup and not using noError = false.

* Be more wary about shm_toc_lookup failure. (Tom Lane, 2018-02-02)
Commit 445dbd82a basically missed the point of commit d46633506, which was that we shouldn't allow shm_toc_lookup() failure to lead to a core dump or assertion crash, because the odds of such a failure should never be considered negligible. It's correct that we can't expect the PARALLEL_KEY_ERROR_QUEUE TOC entry to be there if we have no workers. But if we have no workers, we're not going to do anything in this function with the lookup result anyway, so let's just skip it. That lets the code use the easy-to-prove-safe noError=false case, rather than anything requiring effort to review.

Back-patch to v10, like the previous commit.

Discussion: https://postgr.es/m/3647.1517601675@sss.pgh.pa.us

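A hedged sketch contrasting the two shm_toc_lookup() patterns these commits discuss; shm_toc_lookup(toc, key, noError) is the real API, while the key and surrounding function are hypothetical:

    #include "postgres.h"
    #include "storage/shm_toc.h"

    #define MY_KEY_ERROR_QUEUE  UINT64CONST(0xE000000000000001)

    static void
    attach_error_queues(shm_toc *toc, int nworkers)
    {
        char *queue_space;

        /*
         * Risky pattern: with noError = true a missing entry yields NULL,
         * and later pointer arithmetic like queue_space + offset silently
         * produces garbage instead of failing cleanly.
         */
        queue_space = shm_toc_lookup(toc, MY_KEY_ERROR_QUEUE, true);
        (void) queue_space;

        /*
         * Safer pattern: skip the lookup entirely when the entry cannot
         * exist, and let a missing entry raise an error otherwise.
         */
        if (nworkers > 0)
        {
            queue_space = shm_toc_lookup(toc, MY_KEY_ERROR_QUEUE, false);
            /* ... attach to each worker's queue here ... */
        }
    }
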
* Fix application of identity values in some cases (Peter Eisentraut, 2018-02-02)
Investigation of 2d2d06b7e27e3177d5bef0061801c75946871db3 revealed that identity values were not applied in some further cases, including logical replication subscribers, VALUES RTEs, and ALTER TABLE ... ADD COLUMN. To fix all that, apply the identity column expression in build_column_default() instead of repeating the same logic at each call site.

For ALTER TABLE ... ADD COLUMN ... IDENTITY, the previous coding completely ignored that existing rows for the new column should have values filled in from the identity sequence. The coding using build_column_default() fails for this because the sequence ownership isn't registered until after ALTER TABLE, and we can't do it before because we don't have the column in the catalog yet. So we specially remember in ColumnDef the sequence name that we decided on and build a custom NextValueExpr using that.

Reviewed-by: Michael Paquier <michael.paquier@gmail.com>

* Support parallel btree index builds. (Robert Haas, 2018-02-02)
To make this work, tuplesort.c and logtape.c must also support parallelism, so this patch adds that infrastructure and then applies it to the particular case of parallel btree index builds. Testing to date shows that this can often be 2-3x faster than a serial index build.

The model for deciding how many workers to use is fairly primitive at present, but it's better than not having the feature. We can refine it as we get more experience.

Peter Geoghegan with some help from Rushabh Lathia. While Heikki Linnakangas is not an author of this patch, he wrote other patches without which this feature would not have been possible, and therefore the release notes should possibly credit him as an author of this feature.

Reviewed by Claudio Freire, Heikki Linnakangas, Thomas Munro, Tels, Amit Kapila, me.

Discussion: http://postgr.es/m/CAM3SWZQKM=Pzc=CAHzRixKjp2eO5Q0Jg1SoFQqeXFQ647JiwqQ@mail.gmail.com
Discussion: http://postgr.es/m/CAH2-Wz=AxWqDoVvGU7dq856S4r6sJAj6DBn7VMtigkB33N5eyg@mail.gmail.com

* Refactor code for partition bound searching (Robert Haas, 2018-02-02)
Remove partition_bound_cmp() and partition_bound_bsearch(), whose void * argument could be, depending on the situation, of any of three different types: PartitionBoundSpec *, PartitionRangeBound *, Datum *.

Instead, introduce separate bound-searching functions for each situation: partition_list_bsearch, partition_range_bsearch, partition_range_datum_bsearch, and partition_hash_bsearch. This requires duplicating the code for binary search, but it makes the code much more type safe, involves fewer branches at runtime, and at least in my opinion, is much easier to understand.

Along the way, add an option to partition_range_datum_bsearch allowing the number of keys to be specified, so that we can search for partitions based on a prefix of the full list of partition keys. This is important for pending work to improve partition pruning.

Amit Langote, per a suggestion from me.

Discussion: http://postgr.es/m/CA+TgmoaVLDLc8=YESRwD32gPhodU_ELmXyKs77gveiYp+JE4vQ@mail.gmail.com

* Add new function WaitForParallelWorkersToAttach. (Robert Haas, 2018-02-02)
Once this function has been called, we know that all workers have started and attached to their error queues -- so if any of them subsequently exit uncleanly, we'll be sure to throw an ERROR promptly. Otherwise, users of the ParallelContext machinery must be careful not to wait forever for a worker that has failed to start. Parallel query manages to work without needing this for reasons explained in new comments added by this patch, but it's a useful primitive for other parallel operations, such as the pending patch to make creating a btree index run in parallel.

Amit Kapila, revised by me. Additional review by Peter Geoghegan.

Discussion: http://postgr.es/m/CAA4eK1+e2MzyouF5bg=OtyhDSX+=Ao=3htN=T-r_6s3gCtKFiw@mail.gmail.com

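A hedged sketch of where the new primitive sits in the ParallelContext lifecycle; the entry-point names and worker count are assumptions, and exact signatures have varied between versions, but the ordering of the calls is the point:

    #include "postgres.h"
    #include "access/parallel.h"
    #include "access/xact.h"

    static void
    my_parallel_operation(void)
    {
        ParallelContext *pcxt;

        EnterParallelMode();
        pcxt = CreateParallelContext("my_extension", "my_worker_main", 2);
        InitializeParallelDSM(pcxt);
        LaunchParallelWorkers(pcxt);

        /*
         * New primitive: wait until every launched worker has attached to its
         * error queue, so a worker that dies early surfaces as an ERROR
         * rather than leaving the leader waiting forever.
         */
        WaitForParallelWorkersToAttach(pcxt);

        /* ... cooperate with the workers here ... */

        WaitForParallelWorkersToFinish(pcxt);
        DestroyParallelContext(pcxt);
        ExitParallelMode();
    }
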
* Fix possible failure to mark hash metapage dirty. (Robert Haas, 2018-02-01)
Report and suggested fix by Lixian Zou. Amit Kapila put it in the form of a patch and reviewed it.

Discussion: http://postgr.es/m/151739848647.1239.12528851873396651946@wrigleys.postgresql.org

* psql: Add quit/help behavior/hint, for other tool portability (Bruce Momjian, 2018-02-01)
Issuing 'quit'/'exit' in an empty psql buffer exits psql. Issuing 'quit'/'exit' in a non-empty psql buffer alone on a line with no prefix whitespace issues a hint on how to exit. Also add similar 'help' hints for 'help' in a non-empty psql buffer.

Reported-by: Everaldo Canuto

Discussion: https://postgr.es/m/flat/CALVFHFb-C_5_94hueWg6Dd0zu7TfbpT7hzsh9Zf0DEDOSaAnfA%40mail.gmail.com

Author: original author Robert Haas, modified by me