path: root/src
* Fix multiple minor infelicities in aclchk.c error reports.  (Tom Lane, 2016-06-13)
  pg_type_aclmask reported the wrong type's OID when complaining that it could not find a type's typelem. It also failed to provide a suitable errcode when the initially given OID doesn't exist (which is a user-facing error, since that OID can be user-specified). pg_foreign_data_wrapper_aclmask and pg_foreign_server_aclmask likewise lacked errcode specifications. Trivial cosmetic adjustments too. The wrong-type-OID problem was reported by Petru-Florin Mihancea in bug #14186; the other issues noted by me while reading the code. These errors all seem to be aboriginal in the respective routines, so back-patch as necessary.
  Report: <20160613163159.5798.52928@wrigleys.postgresql.org>
* In planner.c, avoid assuming that all PathTargets have sortgrouprefs.  (Tom Lane, 2016-06-13)
  The struct definition for PathTarget specifies that a NULL sortgrouprefs pointer means no sortgroupref labels. While it's likely that there should always be at least one labeled column in the places that were unconditionally fetching through the pointer, it seems wiser to adhere to the data structure specification and test first. Add a macro to make this convenient. Per experimentation with running the regression tests with a very small parallelization threshold --- the crash I observed may well represent a bug elsewhere, but still this coding was not very robust.
  Report: <20756.1465834072@sss.pgh.pa.us>
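  A NULL-safe accessor macro of the kind described above boils down to a single test; a minimal sketch, assuming the PathTarget layout given in the message (NULL sortgrouprefs means no labels). The macro name here follows the field names but is this editor's guess, not necessarily the one committed:

      /* Fetch the sortgroupref label of column colno, or 0 if none assigned. */
      #define get_pathtarget_sortgroupref(target, colno) \
          ((target)->sortgrouprefs ? (target)->sortgrouprefs[colno] : (Index) 0)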
* Remove extraneous leading whitespace in Windows build script.  (Tom Lane, 2016-06-13)
  Apparently, at least some versions of Microsoft's shell fail on variable assignments that have leading whitespace. This instance, introduced in commit 680513ab7, managed to escape notice for awhile because it's only invoked if building with OpenSSL. Per bug #14185 from Torben Dannhauer.
  Report: <20160613140119.5798.78501@wrigleys.postgresql.org>
* Finish pgindent run for 9.6: Perl files.  (Noah Misch, 2016-06-12)
* Document the authoritative version of perltidy.  (Noah Misch, 2016-06-12)
  Every whole-tree perltidy run has used this version, firmly establishing it as the de facto standard.
* PL/Python: Rename new keyword arguments of plpy.error() etc.  (Peter Eisentraut, 2016-06-11)
  Rename schema -> schema_name etc. to remain consistent with C API and PL/pgSQL.
* Change default of backend_flush_after GUC to 0 (disabled).  (Andres Freund, 2016-06-10)
  While beneficial, both for throughput and average/worst case latency, in a significant number of workloads, there are other workloads in which backend_flush_after can cause significant performance regressions in comparison to < 9.6 releases. The regression is most likely when the hot data set is bigger than shared buffers, but significantly smaller than the operating system's page cache. I personally think that the benefit of enabling backend flush control is considerably bigger than the potential downsides, but a fair argument can be made that not regressing is more important than improving performance/latency. As the latter is the consensus, change the default to 0. The other settings introduced in 428b1d6b2 do not have the same potential for regressions, so leave them enabled. Benchmarks leading up to changing the default have been performed by Mithun Cy, Ashutosh Sharma and Robert Haas.
  Discussion: CAD__OuhPmc6XH=wYRm_+Q657yQE88DakN4=Ybh2oveFasHkoeA@mail.gmail.com
* Remove reltarget_has_non_vars flag.  (Tom Lane, 2016-06-10)
  Commit b12fd41c6 added a "reltarget_has_non_vars" field to RelOptInfo, but failed to maintain it accurately. Since its only purpose was to skip calls to has_parallel_hazard() in the simple case where a rel's targetlist is all Vars, and that call is really pretty cheap in that case anyway, it seems like this is just a case of premature optimization. Let's drop the flag and do the calls unconditionally until it's proven that we need more smarts here.
* Refactor to reduce code duplication for function property checking.  (Tom Lane, 2016-06-10)
  As noted by Andres Freund, we'd accumulated quite a few similar functions in clauses.c that examine all functions in an expression tree to see if they satisfy some boolean test. Reduce the duplication by inventing a function check_functions_in_node() that applies a simple callback function to each SQL function OID appearing in a given expression node. This also fixes some arguable oversights; for example, contain_mutable_functions() did not check aggregate or window functions for mutability. I doubt that that represents a live bug at the moment, because we don't really consider mutability for aggregates; but it might someday be one. I chose to put check_functions_in_node() in nodeFuncs.c because it seemed like other modules might wish to use it in future. That in turn forced moving set_opfuncid() et al into nodeFuncs.c, as the alternative was for nodeFuncs.c to depend on optimizer/setrefs.c which didn't seem very clean. In passing, teach contain_leaked_vars_walker() about a few more expression node types it can safely look through, and improve the rather messy and undercommented code in has_parallel_hazard_walker().
  Discussion: <20160527185853.ziol2os2zskahl7v@alap3.anarazel.de>
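  To illustrate the shape of the refactoring, here is a minimal sketch of how a clauses.c-style volatility check can be phrased with the new helper. It assumes the callback signature implied by the description (a function OID plus a void *context); the exact names in the tree may differ:

      #include "postgres.h"
      #include "catalog/pg_proc.h"
      #include "nodes/nodeFuncs.h"
      #include "utils/lsyscache.h"

      /* Callback: does this function OID denote a volatile function? */
      static bool
      func_is_volatile_checker(Oid func_id, void *context)
      {
          return func_volatile(func_id) == PROVOLATILE_VOLATILE;
      }

      /* Walker: let check_functions_in_node() run the callback over every
       * SQL function OID hidden in this node, then recurse as usual. */
      static bool
      contain_volatile_functions_walker(Node *node, void *context)
      {
          if (node == NULL)
              return false;
          if (check_functions_in_node(node, func_is_volatile_checker, context))
              return true;
          return expression_tree_walker(node, contain_volatile_functions_walker,
                                        context);
      }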
* Rename local variable for consistency.  (Kevin Grittner, 2016-06-10)
  Pointed out by Robert Haas.
* Fix interaction between CREATE INDEX and "snapshot too old".  (Kevin Grittner, 2016-06-10)
  Since indexes are created without valid LSNs, an index created while a snapshot older than old_snapshot_threshold existed could cause queries to return incorrect results when those old snapshots were used, if any relevant rows had been subject to early pruning before the index was built. Prevent usage of a newly created index until all such snapshots are released, for relations where this can happen.
  Questions about the interaction of "snapshot too old" with index creation were initially raised by Andres Freund. Reviewed by Robert Haas.
* Improve the situation for parallel query versus temp relations.  (Tom Lane, 2016-06-09)
  Transmit the leader's temp-namespace state to workers. This is important because without it, the workers do not really have the same search path as the leader. For example, there is no good reason (and no extant code either) to prevent a worker from executing a temp function that the leader created previously; but as things stood it would fail to find the temp function, and then either fail or execute the wrong function entirely. We still prohibit a worker from creating a temp namespace on its own. In effect, a worker can only see the session's temp namespace if the leader had created it before starting the worker, which seems like the right semantics.
  Also, transmit the leader's BackendId to workers, and arrange for workers to use that when determining the physical file path of a temp relation belonging to their session. While the original intent was to prevent such accesses entirely, there were a number of holes in that, notably in places like dbsize.c which assume they can safely access temp rels of other sessions anyway. We might as well get this right, as a small down payment on someday allowing workers to access the leader's temp tables. (With this change, directly using "MyBackendId" as a relation or buffer backend ID is deprecated; you should use BackendIdForTempRelations() instead. I left a couple of such uses alone though, as they're not going to be reachable in parallel workers until we do something about localbuf.c.)
  Move the thou-shalt-not-access-thy-leader's-temp-tables prohibition down into localbuf.c, which is where it actually matters, instead of having it in relation_open(). This amounts to recognizing that access to temp tables' catalog entries is perfectly safe in a worker, it's only the data in local buffers that is problematic.
  Having done all that, we can get rid of the test in has_parallel_hazard() that says that use of a temp table's rowtype is unsafe in parallel workers. That test was unduly expensive, and if we really did need such a prohibition, that was not even close to being a bulletproof guard for it. (For example, any user-defined function executed in a parallel worker might have attempted such access.)
* Repair a bit of pgindent damage.  (Robert Haas, 2016-06-09)
  Inserting line-breaks into the middle of a URL is, to put it mildly, not very helpful, so persuade pgindent to leave it alone.
* pgindent run for 9.6  (Robert Haas, 2016-06-09)
* Don't generate parallel paths for rels with parallel-restricted outputs.  (Robert Haas, 2016-06-09)
  Such paths are unsafe. To make it cheaper to detect when this case applies, track whether a relation's default PathTarget contains any non-Vars. In most cases, the answer will be no, which enables us to determine cheaply that the target list for a proposed path is parallel-safe. However, subquery pull-up can create cases that require us to inspect the target list more carefully.
  Amit Kapila, reviewed by me.
* Yet again update typedefs.list file in preparation for pgindent run  (Robert Haas, 2016-06-09)
  Because the run was delayed, the file had a chance to get out of date.
* Clarify documentation of ceil/ceiling/floor functions.  (Tom Lane, 2016-06-09)
  Document these as "nearest integer >= argument" and "nearest integer <= argument", which will hopefully be less confusing than the old formulation. New wording is from Matlab via Dean Rasheed. I changed the pg_description entries as well as the SGML docs. In the back branches, this will only affect installations initdb'd in the future, but it should be harmless otherwise.
  Discussion: <CAEZATCW3yzJo-NMSiQs5jXNFbTsCEftZS-Og8=FvFdiU+kYuSA@mail.gmail.com>
* Mop-up for parallel degree-ectomy.  (Tom Lane, 2016-06-09)
  Fix a couple of overlooked uses of "degree" terminology. Make the parallel worker count selection logic in create_plain_partial_paths more robust (in particular, it failed with max_parallel_workers_per_gather set to zero).
* Eliminate "parallel degree" terminology.Robert Haas2016-06-09
| | | | | | | | | | | | This terminology provoked widespread complaints. So, instead, rename the GUC max_parallel_degree to max_parallel_workers_per_gather (leaving room for a possible future GUC max_parallel_workers that acts as a system-wide limit), and rename the parallel_degree reloption to parallel_workers. Rename structure members to match. These changes create a dump/restore hazard for users of PostgreSQL 9.6beta1 who have set the reloption (or applied the GUC using ALTER USER or ALTER DATABASE).
* Test parallel query essentials in "make check".  (Noah Misch, 2016-06-07)
  Clément Prévost and Peter Eisentraut
* Make psql_crosstab plans more stable  (Alvaro Herrera, 2016-06-07)
  To achieve this, ANALYZE the data table before querying it, as suggested by Tom Lane. On my system, this enables the test to pass with 128 kB of work_mem (a value with which other tests fail -- so it seems good enough). Reported by Michaël Paquier.
* nls-global.mk: search build dir for source files, too  (Alvaro Herrera, 2016-06-07)
  In VPATH builds, the build directory was not being searched for files in GETTEXT_FILES, leading to failure to construct the .pot files. This has bit me all along, but never hard enough to get it fixed; I suppose not a lot of people use VPATH and NLS-enabled builds, and those that do don't run "make update-po" often. This is a longstanding problem, so backpatch all the way back.
* Add missing translate_columns array entry  (Alvaro Herrera, 2016-06-07)
  This omission caused an assertion error in \dA+.
* Fix loose ends for SQL ACCESS METHOD objects  (Alvaro Herrera, 2016-06-07)
  COMMENT ON ACCESS METHOD was missing; add it, along with psql tab-completion support for it. psql was also missing a way to list existing access methods; the new \dA command does that. Also add tab-completion support for DROP ACCESS METHOD.
  Author: Michael Paquier
  Discussion: https://www.postgresql.org/message-id/CAB7nPqTzdZdu8J7EF8SXr_R2U5bSUUYNOT3oAWBZdEoggnwhGA@mail.gmail.com
* Revert "Use Foreign Key relationships to infer multi-column join selectivity".Tom Lane2016-06-07
| | | | | | | | | | | | | | This commit reverts 137805f89 as well as the associated commits 015e88942, 5306df283, and 68d704edb. We found multiple bugs in this feature, and there was concern about possible planner slowdown (though to be fair, exhibiting a very large slowdown proved difficult). The way forward requires a considerable rewrite, which may or may not be possible to accomplish in time for beta2. In my judgment reviewing the rewrite will be easier to accomplish starting from a clean slate, so let's temporarily revert what's there now. This also leaves us in a safe state if it turns out to be necessary to postpone the rewrite to the next development cycle. Discussion: <20160429102531.GA13701@huehner.biz>
* Message style and wording fixes  (Peter Eisentraut, 2016-06-07)
* Correct phrasing in dsm.c comments  (Simon Riggs, 2016-06-07)
* Minor typos / copy-editing for snapmgr.c  (Stephen Frost, 2016-06-07)
  Noticed while reviewing snapshot management.
* psql: Add missing file to nls.mk  (Peter Eisentraut, 2016-06-07)
  crosstabview.c was not added to nls.mk when it was added. Also remove redundant gettext markers, since psql_error() is already registered as a gettext keyword.
* pgbench: Fix order in --help output  (Peter Eisentraut, 2016-06-07)
  The new option --progress-timestamp was just added at the end. Put it in alphabetical order.
* pg_dump only selected components of ACCESS METHODs  (Stephen Frost, 2016-06-07)
  dumpAccessMethod() didn't get the memo that we now have a bitfield for the components which should be dumped instead of a simple boolean. Correct that by checking if the relevant bit is set for each component being dumped out (and not dumping it out if it isn't set). This corrects an issue where CREATE ACCESS METHOD commands were being included in non-binary-upgrades when an extension included an access method (as the bloom extension does). Also add a regression test to make sure that we only dump out the ACCESS METHOD commands, when they are part of an extension, when doing a binary upgrade.
  Pointed out by Thom Brown.
* PL/Python: Move ereport wrapper test cases to separate file  (Peter Eisentraut, 2016-06-07)
  In commit 5c3c3cd0a3046339597a03bc708cb5530dc07059, the new tests were apparently just dumped into the first convenient file. Move them to a separate file dedicated to testing that functionality and leave the plpython_test test to test basic functionality, as it did before.
* Don't reset changes_since_analyze after a selective-columns ANALYZE.  (Tom Lane, 2016-06-06)
  If we ANALYZE only selected columns of a table, we should not postpone auto-analyze because of that; other columns may well still need stats updates. As committed, the counter is left alone if a column list is given, whether or not it includes all analyzable columns of the table. Per complaint from Tomasz Ostrowski. It's been like this a long time, so back-patch to all supported branches.
  Report: <ef99c1bd-ff60-5f32-2733-c7b504eb960c@ato.waw.pl>
* Stop the executor if no more tuples can be sent from worker to leader.  (Robert Haas, 2016-06-06)
  If a Gather node has read as many tuples as it needs (for example, due to Limit) it may detach the queue connecting it to the worker before reading all of the worker's tuples. Rather than let the worker continue to generate and send all of the results, have it stop after sending the next tuple. More could be done here to stop the worker even quicker, but this is about as well as we can hope to do for 9.6. This is in response to a problem report from Andreas Seltenreich. Commit 44339b892a04e94bbb472235882dc6f7023bdc65 should actually be sufficient to fix that example even without this change, but it seems better to do this, too, since we might otherwise waste quite a large amount of effort in one or more workers.
  Discussion: CAA4eK1KOKGqmz9bGu+Z42qhRwMbm4R5rfnqsLCNqFs9j14jzEA@mail.gmail.com
  Amit Kapila
* shm_mq: After a send fails with SHM_MQ_DETACHED, later ones should too.  (Robert Haas, 2016-06-06)
  Prior to this patch, it was occasionally possible, after shm_mq_sendv had previously returned SHM_MQ_DETACHED, for a later shm_mq_sendv operation to fail an assertion instead of just again returning SHM_MQ_DETACHED. From the shm_mq code's point of view, it was expecting to be called again with the same arguments, since the previous operation had only partially completed. However, a caller who isn't using non-blocking mode won't be prepared to repeat the call with the same arguments, and this code shouldn't expect that they will. Repair in such a way that we'll be OK whether the next call uses the same arguments or not.
  Found by Andreas Seltenreich. Analysis and sketch of fix by Amit Kapila. Patch by me, reviewed by Amit Kapila.
* pg_upgrade: Don't overwrite existing files.  (Robert Haas, 2016-06-06)
  For historical reasons, copyFile and rewriteVisibilityMap took a force argument which was always passed as true, meaning that any existing file should be overwritten. However, it seems much safer to instead fail if a file we need to write already exists. While we're at it, remove the "force" argument altogether, since it was never passed as anything other than true (and now we would never pass it as anything other than false, if we kept it). Noted by Andres Freund during post-commit review of the patch that added rewriteVisibilityMap, commit 7087166a88fe0c04fc6636d0d6d6bea1737fc1fb, but this also changes the behavior when copying files without rewriting them.
  Patch by Masahiko Sawada.
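  The fail-if-exists behavior described here is the classic O_CREAT | O_EXCL idiom; a standalone sketch of the pattern (illustration only, not the actual pg_upgrade code):

      #include <errno.h>
      #include <fcntl.h>
      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>

      /* Create a destination file, refusing to clobber an existing one:
       * O_EXCL makes open() fail with EEXIST if the path already exists. */
      static int
      open_destination(const char *path)
      {
          int fd = open(path, O_WRONLY | O_CREAT | O_EXCL, 0600);

          if (fd < 0)
          {
              fprintf(stderr, "could not create \"%s\": %s\n",
                      path, strerror(errno));
              exit(1);
          }
          return fd;
      }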
* Fix typo.  (Robert Haas, 2016-06-06)
  Jim Nasby
* pg_upgrade: Improve error checking in rewriteVisibilityMap.  (Robert Haas, 2016-06-06)
  In the old logic, if read() were to return an error, we'd silently stop rewriting the visibility map at that point in the file. That's safe, but reporting the error is better, so do that instead. Report by Andres Freund. Patch by Masahiko Sawada, with one correction by me.
* Fix whitespace  (Peter Eisentraut, 2016-06-05)
* Properly initialize SortSupport for ORDER BY rechecks in nodeIndexscan.c.  (Tom Lane, 2016-06-05)
  Fix still another bug in commit 35fcb1b3d: it failed to fully initialize the SortSupport states it introduced to allow the executor to re-check ORDER BY expressions containing distance operators. That led to a null pointer dereference if the sortsupport code tried to use ssup_cxt. The problem only manifests in narrow cases, explaining the lack of previous field reports. It requires a GiST-indexable distance operator that lacks SortSupport and is on a pass-by-ref data type, which among core+contrib seems to be only btree_gist's interval opclass; and it requires the scan to be done as an IndexScan not an IndexOnlyScan, which explains how btree_gist's regression test didn't catch it. Per bug #14134 from Jihyun Yu.
  Peter Geoghegan
  Report: <20160511154904.2603.43889@wrigleys.postgresql.org>
* Fix grammar's AND/OR flattening to work with operator_precedence_warning.  (Tom Lane, 2016-06-03)
  It'd be good for "(x AND y) AND z" to produce a three-child AND node whether or not operator_precedence_warning is on, but that failed to happen when it's on because makeAndExpr() didn't look through the added AEXPR_PAREN node. This has no effect on generated plans because prepqual.c would flatten the AND nest anyway; but it does affect the number of parens printed in ruleutils.c, for example. I'd already fixed some similar hazards in parse_expr.c in commit abb164655, but didn't think to search gram.y for problems of this ilk. Per gripe from Jean-Pierre Pelletier.
  Report: <fa0535ec6d6428cfec40c7e8a6d11156@mail.gmail.com>
* Inline the easy cases in MakeExpandedObjectReadOnly().  (Tom Lane, 2016-06-03)
  This attempts to buy back some of whatever performance we lost from fixing bug #14174 by inlining the initial checks in MakeExpandedObjectReadOnly() into the callers. We can do that in a macro without creating multiple-evaluation hazards, so it's pretty much free notationally; and the amount of code added to callers should be minimal as well. (Testing a value can't take many more instructions than passing it to a subroutine.) Might as well inline DatumIsReadWriteExpandedObject() while we're at it. This is an ABI break for callers, so it doesn't seem safe to put into 9.5, but I see no reason not to do it in HEAD.
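  A rough sketch of the kind of macro being described: only the cheap tests run inline, each argument is evaluated at most once on any path, and the out-of-line function is reached only when the datum could really be an expanded object. The exact definition in the tree may differ:

      /* Pass-by-value or non-varlena datums can't be expanded objects, so
       * only the remaining case needs the out-of-line call. */
      #define MakeExpandedObjectReadOnly(d, isnull, typlen) \
          (((isnull) || (typlen) != -1) ? (d) : \
           MakeExpandedObjectReadOnlyInternal(d))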
* Mark read/write expanded values as read-only in ValuesNext(), too.  (Tom Lane, 2016-06-03)
  Further thought about bug #14174 motivated me to try the case of a R/W datum being returned from a VALUES list, and sure enough it was broken. Fix that. Also add a regression test case exercising the same scenario for FunctionScan. That's not broken right now, because the function's result will get shoved into a tuplestore between generation and use; but it could easily become broken whenever we get around to optimizing FunctionScan better. There don't seem to be any other places where we put the result of expression evaluation into a virtual tuple slot that could then be the source for Vars of further expression evaluation, so I think this is the end of this bug.
* Mark read/write expanded values as read-only in ExecProject().  (Tom Lane, 2016-06-03)
  If a plan node output expression returns an "expanded" datum, and that output column is referenced in more than one place in upper-level plan nodes, we need to ensure that what is returned is a read-only reference not a read/write reference. Otherwise one of the referencing sites could scribble on or even delete the expanded datum before we have evaluated the others. Commit 1dc5ebc9077ab742, which introduced this feature, supposed that it'd be sufficient to make SubqueryScan nodes force their output columns to read-only state. The folly of that was revealed by bug #14174 from Andrew Gierth, and really should have been immediately obvious considering that the planner will happily optimize SubqueryScan nodes out of the plan without any regard for this issue.
  The safest fix seems to be to make ExecProject() force its results into read-only state; that will cover every case where a plan node returns expression results. Actually we can delegate this to ExecTargetList() since we can recursively assume that plain Vars will not reference read-write datums. That should keep the extra overhead down to something minimal. We no longer need ExecMakeSlotContentsReadOnly(), which was introduced only in support of the idea that just a few plan node types would need to do this.
  In the future it would be nice to have the planner account for this problem and inject force-to-read-only expression evaluation nodes into only the places where there's a risk of multiple evaluation. That's not a suitable solution for 9.5 or even 9.6 at this point, though.
  Report: <20160603124628.9932.41279@wrigleys.postgresql.org>
* Remove bogus code to apply PathTargets to partial paths.  (Robert Haas, 2016-06-03)
  The partial paths that get modified may already have been used as part of a GatherPath which appears in the path list, so modifying them is not a good idea at this stage - especially because this code has no check that the PathTarget is in fact parallel-safe. When partial aggregation is being performed, this is actually harmless because we'll end up replacing the pathtargets here with the correct ones within create_grouping_paths(). But if we've got a query tree containing only scan/join operations then this can result in incorrectly pushing down parallel-restricted target list entries. If those are, for example, references to subqueries, that can crash the server; but it's wrong in any event.
  Amit Kapila
* Mark PostmasterPid as PGDLLIMPORT.  (Robert Haas, 2016-06-03)
  This is so that extensions can use it.
  Michael Paquier
* Add new snapshot fields to serialize/deserialize functions.  (Kevin Grittner, 2016-06-03)
  The "snapshot too old" condition was not being recognized when using a copied snapshot, since the original timestamp and lsn were not being passed along. Noticed when testing the combination of "snapshot too old" with parallel query execution.
* Fix comment to be more accurate.  (Robert Haas, 2016-06-03)
  Now that we skip vacuuming all-frozen pages, this comment needs updating.
  Masahiko Sawada
* Suppress -Wunused-result warnings about write(), again.  (Tom Lane, 2016-06-03)
  Adopt the same solution as in commit aa90e148ca70a235, but this time let's put the ugliness inside the write_stderr() macro, instead of expecting each call site to deal with it. Back-port that decision into psql/common.c where I got the macro from in the first place. Per gripe from Peter Eisentraut.
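  The usual way to keep glibc's warn_unused_result attribute quiet is to capture write()'s return value and discard it explicitly; a sketch of a wrapper macro in that style (variable names are illustrative, not necessarily those used in the tree):

      #include <stdio.h>
      #include <string.h>
      #include <unistd.h>

      /* Write a message with a bare write() while silencing -Wunused-result:
       * the result is stored into a local and then explicitly ignored. */
      #define write_stderr(str) \
          do { \
              const char *str_ = (str); \
              int rc_; \
              rc_ = write(fileno(stderr), str_, strlen(str_)); \
              (void) rc_; \
          } while (0)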
* Fix various common mispellings.  (Greg Stark, 2016-06-03)
  Mostly these are just comments but there are a few in documentation and a handful in code and tests. Hopefully this doesn't cause too much unnecessary pain for backpatching. I relented from some of the most common like "thru" for that reason. The rest don't seem numerous enough to cause problems.
  Thanks to Kevin Lyda's tool https://pypi.python.org/pypi/misspellings