path: root/src/backend
...

* Code review for early drop of orphaned temp relations in autovacuum. (Tom Lane, 2016-11-27)

Commit a734fd5d1 exposed some race conditions that existed previously in the autovac code, but were basically harmless because autovac would not try to delete orphaned relations immediately. Specifically, the test for orphaned-ness was made on a pg_class tuple that might be dead by now, allowing autovac to try to remove a table that the owning backend had just finished deleting. This resulted in a hard crash due to inadequate caution about accessing the table's catalog entries without any lock. We must take a relation lock and then recheck whether the table is still present and still looks deletable before we do anything.

Also, it seemed to me that deleting multiple tables per transaction, and trying to continue after errors, represented unjustifiable complexity. We do not expect this code path to be taken often in the field, nor even during testing, which means that prioritizing performance over correctness is a bad tradeoff. Rip all that out in favor of just starting a new transaction after each successful temp table deletion. If we're unlucky enough to get an error, which shouldn't happen anyway now that we're being more cautious, let the autovacuum worker fail as it normally would.

In passing, improve the order of operations in the initial scan loop. Now that we don't care about whether a temp table is a wraparound hazard, there's no need to perform extract_autovac_opts, get_pgstat_tabentry_relid, or relation_needs_vacanalyze for temp tables.

Also, if GetTempNamespaceBackendId returns InvalidBackendId (indicating it doesn't recognize the schema as temp), treat that as meaning it's NOT an orphaned temp table, not that it IS one, which is what happened before because BackendIdGetProc necessarily failed. The case really shouldn't come up for a table that has RELPERSISTENCE_TEMP, but the consequences if it did seem undesirable. (This might represent a back-patchable bug fix; not sure if it's worth the trouble.)

Discussion: https://postgr.es/m/21299.1480272347@sss.pgh.pa.us

* Fix test about ignoring extension dependencies during extension scripts. (Tom Lane, 2016-11-26)

Commit 08dd23cec introduced an exception to the rule that extension member objects can only be dropped as part of dropping the whole extension, intending to allow such drops while running the extension's own creation or update scripts. However, the exception was only applied at the outermost recursion level, because it was modeled on a pre-existing check to ignore dependencies on objects listed in pendingObjects.

Bug #14434 from Philippe Beaudoin shows that this is inadequate: in some cases we can reach an extension member object by recursion from another one. (The bug concerns the serial-sequence case; I'm not sure if there are other cases, but there might well be.) To fix, revert 08dd23cec's changes to findDependentObjects() and instead apply the creating_extension exception regardless of stack level.

Having seen this example, I'm a bit suspicious that the pendingObjects logic is also wrong and such cases should likewise be allowed at any recursion level. However, changing that would interact in subtle ways with the recursion logic (at least it would need to be moved to after the recursing-from check). Given that the code's been like that a long time, I'll refrain from touching it without a clear example showing it's wrong.

Back-patch to all active branches. In HEAD and 9.6, where suitable test infrastructure exists, add a regression test case based on the bug report.

Report: <20161125151448.6529.33039@wrigleys.postgresql.org>
Discussion: <13224.1480177514@sss.pgh.pa.us>

* Bring some clarity to the defaults for the xxx_flush_after parameters. (Tom Lane, 2016-11-25)

Instead of confusingly stating platform-dependent defaults for these parameters in the comments in postgresql.conf.sample (with the main entry being a lie on Linux), teach initdb to install the correct platform-dependent value in postgresql.conf, similarly to the way we handle other platform-dependent defaults. This won't do anything for existing 9.6 installations, but since it's effectively only a documentation improvement, that seems OK.

Since this requires initdb to have access to the default values, move the #define's for those to pg_config_manual.h; the original placement in bufmgr.h is unworkable because that file can't be included by frontend programs.

Adjust the default value for wal_writer_flush_after so that it is 1MB regardless of XLOG_BLCKSZ, conforming to what is stated in both the SGML docs and postgresql.conf. (We could alternatively make it scale with XLOG_BLCKSZ, but I'm not sure I see the point.)

Copy-edit related SGML documentation.

Fabien Coelho and Tom Lane, per a gripe from Tomas Vondra.

Discussion: <30ebc6e3-8358-09cf-44a8-578252938424@2ndquadrant.com>

* Mark a query's topmost Paths parallel-unsafe if they will have initPlans. (Tom Lane, 2016-11-25)

Andreas Seltenreich found another case where we were being too optimistic about allowing a plan to be considered parallelizable despite it containing initPlans. It seems like the real issue here is that if we know we are going to tack initPlans onto the topmost Plan node for a subquery, we had better mark that subquery's result Paths as not-parallel-safe. That fixes this problem and allows reversion of a kluge (added in commit 7b67a0a49 and extended in f24cf960d) to not trust the parallel_safe flag at top level.

Discussion: <874m2w4k5d.fsf@ex.ansel.ydns.eu>

* Check for pending trigger events on far end when dropping an FK constraint. (Tom Lane, 2016-11-25)

When dropping a foreign key constraint with ALTER TABLE DROP CONSTRAINT, we refuse the drop if there are any pending trigger events on the named table; this ensures that we won't remove the pg_trigger row that will be consulted by those events. But we should make the same check for the referenced relation, else we might remove a due-to-be-referenced pg_trigger row for that relation too, resulting in "could not find trigger NNN" or "relation NNN has no triggers" errors at commit. Per bug #14431 from Benjie Gillam. Back-patch to all supported branches.

Report: <20161124114911.6530.31200@wrigleys.postgresql.org>
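
A minimal sketch of the failure scenario (tables are hypothetical; the default constraint name fk_a_fkey is assumed):

    CREATE TABLE pk (a int PRIMARY KEY);
    CREATE TABLE fk (a int REFERENCES pk DEFERRABLE INITIALLY DEFERRED);
    INSERT INTO pk VALUES (1);

    BEGIN;
    DELETE FROM pk WHERE a = 1;               -- queues a deferred FK event whose trigger lives on pk
    ALTER TABLE fk DROP CONSTRAINT fk_a_fkey; -- previously allowed, removing pk's trigger too
    COMMIT;                                   -- ... which then failed with "could not find trigger NNN"

With the fix, the ALTER TABLE is refused because of the pending trigger events on pk, the referenced table.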

* Fix typo in comment (Magnus Hagander, 2016-11-25)

Thomas Munro

* Fix commit_ts for FrozenXid and BootstrapXid (Alvaro Herrera, 2016-11-24)

Previously, requesting commit timestamp for transactions FrozenTransactionId and BootstrapTransactionId resulted in an error. But since those values can validly appear in committed tuples' Xmin, this behavior is unhelpful and error prone: each caller would have to special-case those values before requesting timestamp data for an Xid. We already have a perfectly good interface for returning "the Xid you requested is too old for us to have commit TS data for it", so let's use that instead.

Backpatch to 9.5, where commit timestamps appeared.

Author: Craig Ringer
Discussion: https://www.postgresql.org/message-id/CAMsr+YFM5Q=+ry3mKvWEqRTxrB0iU3qUSRnS28nz6FJYtBwhJg@mail.gmail.com
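
With track_commit_timestamp enabled, the changed behavior can be sketched like this (xids 1 and 2 are BootstrapTransactionId and FrozenTransactionId; the NULL result is the same "too old" answer given for any ancient xid):

    -- requires track_commit_timestamp = on
    SELECT pg_xact_commit_timestamp('2'::xid);  -- FrozenTransactionId
    -- before: ERROR
    -- after:  NULL, like any xid too old to have commit timestamp data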

* Avoid masking a function parameter name with a local variable name. (Tom Lane, 2016-11-23)

No actual bug here, but it might confuse readers, so change the name of the local variable.

Ashutosh Bapat

* Make sure ALTER TABLE preserves index tablespaces. (Tom Lane, 2016-11-23)

When rebuilding an existing index, ALTER TABLE correctly kept the physical file in the same tablespace, but it messed up the pg_class entry if the index had been in the database's default tablespace and "default_tablespace" was set to some non-default tablespace. This led to an inaccessible index.

Fix by fixing pg_get_indexdef_string() to always include a tablespace clause, whether or not the index is in the default tablespace. The previous behavior was installed in commit 537e92e41, and I think it just wasn't thought through very clearly; certainly the possible effect of default_tablespace wasn't considered. There's some risk in changing the behavior of this function, but there are no other call sites in the core code. Even if it's being used by some third party extension, it's fairly hard to envision a usage that is okay with a tablespace clause being appended some of the time but can't handle it being appended all the time.

Back-patch to all supported versions.

Code fix by me, investigation and test cases by Michael Paquier.

Discussion: <1479294998857-5930602.post@n3.nabble.com>
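
A sketch of the sequence that previously left a wrong pg_class entry (assumes a tablespace ts1 exists; names are hypothetical):

    CREATE TABLE t (a int PRIMARY KEY);        -- index goes to the database default tablespace
    SET default_tablespace = ts1;
    ALTER TABLE t ALTER COLUMN a TYPE bigint;  -- rebuilds the index; pg_class previously
                                               -- claimed ts1 while the file stayed put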

* Remove barrier.h (Robert Haas, 2016-11-22)

A new thing also called a "barrier" is proposed, but whether we decide to take that patch or not, this file seems to have outlived its usefulness.

Thomas Munro

* Improve handling of "UPDATE ... SET (column_list) = row_constructor". (Tom Lane, 2016-11-22)

Previously, the right-hand side of a multiple-column assignment, if it wasn't a sub-SELECT, had to be a simple parenthesized expression list, because gram.y was responsible for "bursting" the construct into independent column assignments. This had the minor defect that you couldn't write ROW (though you should be able to, since the standard says this is a row constructor), and the rather larger defect that unlike other uses of row constructors, we would not expand a "foo.*" item into multiple columns.

Fix that by changing the RHS to be just "a_expr" in the grammar, leaving it to transformMultiAssignRef to separate the elements of a RowExpr; which it will do only after performing standard transformation of the RowExpr, so that "foo.*" behaves as expected.

The key reason we didn't do that before was the hard-wired handling of DEFAULT tokens (SetToDefault nodes). This patch deals with that issue by allowing DEFAULT in any a_expr and having parse analysis throw an error if SetToDefault is found in an unexpected place. That's an improvement anyway since the error can be more specific than just "syntax error".

The SQL standard suggests that the RHS could be any a_expr yielding a suitable row value. This patch doesn't really move the goal posts in that respect --- you're still limited to RowExpr or a sub-SELECT --- but it does fix the grammar restriction, so it provides some tangible progress towards a full implementation. And the limitation is now documented by an explicit error message rather than an unhelpful "syntax error".

Discussion: <8542.1479742008@sss.pgh.pa.us>
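
For illustration, forms the grammar now accepts (hypothetical tables t(id, a, b) and s(x, y)):

    -- the ROW keyword now works on the RHS of a multiple-column assignment
    UPDATE t SET (a, b) = ROW(1, 2);

    -- "foo.*" now expands inside the row constructor, here to (s.x, s.y)
    UPDATE t SET (a, b) = ROW(s.*) FROM s WHERE s.x = t.id;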

* Support condition variables. (Robert Haas, 2016-11-22)

Condition variables provide a flexible way to sleep until a cooperating process causes an arbitrary condition to become true. In simple cases, this can be accomplished with a WaitLatch/ResetLatch loop; the cooperating process can call SetLatch after performing work that might cause the condition to be satisfied, and the waiting process can recheck the condition each time. However, if the process performing the work doesn't have an easy way to identify which processes might be waiting, this doesn't work, because it can't identify which latches to set. Condition variables solve that problem by internally maintaining a list of waiters; a process that may have caused some waiter's condition to be satisfied must "signal" or "broadcast" on the condition variable.

Robert Haas and Thomas Munro

* Fix uninitialized variable. (Tom Lane, 2016-11-21)

Oversight in a734fd5d1.

Michael Paquier

* Fix PGLC_localeconv() to handle errors better. (Tom Lane, 2016-11-21)

The code was intentionally not very careful about leaking strdup'd strings in case of an error. That was forgivable probably, but it also failed to notice strdup() failures, which could lead to subsequent null-pointer-dereference crashes, since many callers unsurprisingly didn't check for null pointers in the struct lconv fields. An even worse problem is that it could throw error while we were setlocale'd to a non-C locale, causing unwanted behavior in subsequent libc calls.

Rewrite to ensure that we cannot throw elog(ERROR) until after we've restored the previous locale settings, or at least attempted to. (I'm sorely tempted to make restore failure be a FATAL error, but will refrain for the moment.) Having done that, it's not much more work to ensure that we clean up strdup'd storage on the way out, too.

This code is substantially the same in all supported branches, so back-patch all the way.

Michael Paquier and Tom Lane

Discussion: <CAB7nPqRMbGqa_mesopcn4MPyTs34eqtVEK7ELYxvvV=oqS00YA@mail.gmail.com>

* Fix optimization for skipping searches for parallel-query hazards. (Tom Lane, 2016-11-21)

Fix thinko in commit da1c91631: even if the original query was free of parallel hazards, we might introduce such a hazard by adding PARAM_EXEC Param nodes. Adjust is_parallel_safe() so that it will scan the given expression whenever any such nodes have been created.

Per report from Andreas Seltenreich.

Discussion: <878tse6yvf.fsf@credativ.de>

* autovacuum: Drop orphan temp tables more quickly but with more caution. (Robert Haas, 2016-11-21)

Previously, we only dropped an orphan temp table when it became old enough to threaten wraparound; now, we drop it immediately. The only value of waiting is that someone might be able to examine the contents of the orphan temp table for forensic purposes, but it's pretty difficult to actually do that and few users will wish to do so. On the flip side, not performing the drop immediately generates log spam and bloats pg_class.

In addition, per a report from Grigory Smolkin, if a temporary schema contains a very large number of temporary tables, a backend attempting to clear the temporary schema might fail due to lock table exhaustion. It's helpful for autovacuum to clean up after such cases, and we don't want it to wait for wraparound to threaten before doing so. To prevent autovacuum from failing in the same manner as a backend trying to drop an entire temp schema, remove orphan temp tables in batches of 50, committing after each batch, so that we don't accumulate an unbounded number of locks. If a drop fails, retry other orphan tables that need to be dropped up to 10 times before giving up. With this system, if a backend does fail to clean a temporary schema due to lock table exhaustion, autovacuum should hopefully put things right the next time it processes the database.

Discussion: CAB7nPqSbYT6dRwsXVgiKmBdL_ARemfDZMPA+RPeC_ge0GK70hA@mail.gmail.com

Michael Paquier, with a bunch of comment changes by me.

* Fix test for subplans in force-parallel mode. (Tom Lane, 2016-11-21)

We mustn't force parallel mode if the query has any subplans, since ExecSerializePlan doesn't transmit them to workers. Testing top_plan->initPlan is inadequate because (1) there might be initPlans attached to lower plan nodes, and (2) non-initPlan subplans don't work either. There's certainly room for improvement in those restrictions, but for the moment that's what we've got.

Amit Kapila, per report from Andreas Seltenreich

Discussion: <8737im6pmh.fsf@credativ.de>

* Prevent multicolumn expansion of "foo.*" in an UPDATE source expression. (Tom Lane, 2016-11-20)

Because we use transformTargetList() for UPDATE as well as SELECT tlists, the code accidentally tried to expand a "*" reference into several columns. This is nonsensical, because the UPDATE syntax provides exactly one target column to put the value into. The immediate result was that transformUpdateTargetList() got confused and reported "UPDATE target count mismatch --- internal error". It seems better to treat such a reference as a plain whole-row variable, as it would be in other contexts. (This could produce useful results when the target column is of composite type.)

Fix by tweaking transformTargetList() to perform *-expansion only conditionally, depending on its exprKind parameter.

Back-patch to 9.3. The problem exists further back, but a fix would be much more invasive before that, because transformTargetList() wasn't told what kind of list it was working on. Doesn't seem worth the trouble given the lack of field reports. (I only noticed it because I was checking the code while trying to improve the documentation about how we handle "foo.*".)

Discussion: <4308.1479595330@sss.pgh.pa.us>
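
For example (hypothetical tables; note the target column's type is s's rowtype):

    CREATE TABLE s (x int, y int);
    CREATE TABLE t (id int, c s);    -- column of s's composite rowtype

    -- "s.*" in the source expression is now a whole-row reference rather than
    -- a multi-column expansion, which previously drew the internal error
    UPDATE t SET c = s.* FROM s WHERE s.x = t.id;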

* Fix latent costing error in create_merge_append_path. (Tom Lane, 2016-11-19)

create_merge_append_path should use the path rowcount it just computed, not rel->tuples, for costing purposes. Those numbers should always be the same at present, but if we ever support parameterized MergeAppend paths (a case this function is otherwise prepared for), the former would be right and the latter wrong.

No need for back-patch since the problem is only latent.

Ashutosh Bapat

Discussion: <CAFjFpRek+cLCnTo24youuGtsq4zRphEB8EUUPjDxZjnL4n4HYQ@mail.gmail.com>

* Code review for GUC serialization/deserialization code. (Tom Lane, 2016-11-19)

The serialization code dumped core for a string-valued GUC whose value is NULL, which is a legal state. The infrastructure isn't capable of transmitting that state exactly, but fortunately, transmitting an empty string instead should be close enough (compare, eg, commit e45e990e4).

The code potentially underestimated the space required to format a real-valued variable, both because it made an unwarranted assumption that %g output would never be longer than %e output, and because it didn't count right even for %e format. In practice this would pretty much always be masked by overestimates for other variables, but it's still wrong.

Also fix boundary-case error in read_gucstate, incorrect handling of the case where guc_sourcefile is non-NULL but zero length (not clear that can happen, but if it did, this code would get totally confused), and confusingly useless check for a NULL result from read_gucstate.

Andreas Seltenreich discovered the core dump; other issues noted while reading nearby code. Back-patch to 9.5 where this code was introduced.

Michael Paquier and Tom Lane

Discussion: <871sy78wno.fsf@credativ.de>

* Add pg_sequences view (Peter Eisentraut, 2016-11-18)

Like pg_tables, pg_views, and others, this view contains information about sequences in a way that is independent of the system catalog layout but more comprehensive than the information schema.

To help implement the view, add a new internal function pg_sequence_last_value() to return the last value of a sequence. This is kept separate from pg_sequence_parameters() to separate querying run-time state from catalog-like information.

Reviewed-by: Andreas Karlsson <andreas@proxel.se>
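
For example, the view can be queried like its siblings:

    -- catalog-independent overview of sequences, including run-time state
    SELECT schemaname, sequencename, last_value
    FROM pg_sequences
    WHERE schemaname = 'public';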

* Remove or reduce verbosity of some debug messages. (Robert Haas, 2016-11-17)

The debug messages that merely print StartTransactionCommand, CommitTransactionCommand, ProcessUtility, or ProcessQuery with no additional details seem to be useless. Get rid of them.

The transaction status messages produced by ShowTransactionState are occasionally useful, but they are extremely verbose, producing multiple lines of log output every time they fire, which can happen multiple times per transaction. So, reduce the level to DEBUG5; avoid emitting an extra line just to explain which debug point is at issue; and tighten up the rest of the message so it doesn't use quite so much horizontal space.

With these changes, it's possible to run a somewhat busy system with a log level even as high as DEBUG4, whereas previously anything above DEBUG2 would flood the log with output that probably wasn't really all that useful.

* Avoid pin scan for replay of XLOG_BTREE_VACUUM in all cases (Alvaro Herrera, 2016-11-17)

Replay of XLOG_BTREE_VACUUM during Hot Standby was previously thought to require complex interlocking that matched the requirements on the master. This required an O(N) operation that became a significant problem with large indexes, causing replication delays of seconds or in some cases minutes while the XLOG_BTREE_VACUUM was replayed.

This commit skips the “pin scan” that was previously required, by observing in detail when and how it is safe to do so, with full documentation. The pin scan is skipped only in replay; the VACUUM code path on master is not touched here.

No tests included. Manual tests using an additional patch to view WAL records and their timing have shown the change in WAL records and their handling has successfully reduced replication delay.

This is a back-patch of commits 687f2cd7a015, 3e4b7d87988f, b60284261375 by Simon Riggs, to branches 9.4 and 9.5. No further backpatch is possible because this depends on catalog scans being MVCC. I (Álvaro) additionally fixed a slight problem in the README, which explains why this touches the 9.6 and master branches.

* Reserve zero as an invalid DSM handle. (Robert Haas, 2016-11-15)

Previously, the handle for the control segment could not be zero, but some other DSM segment could potentially have a handle value of zero. However, that means that if someone wanted to store a dsm_handle that might or might not be valid, they would need a separate boolean to keep track of whether the associated value is legal. That's annoying, so change things so that no DSM segment can ever have a handle of 0 - or as we call it here, DSM_HANDLE_INVALID.

Thomas Munro. This was submitted as part of a much larger patch to add a malloc-like allocator for dynamic shared memory, but this part seems like a good idea independently of the rest of the patch.

* Account for catalog snapshot in PGXACT->xmin updates. (Tom Lane, 2016-11-15)

The CatalogSnapshot was not plugged into SnapshotResetXmin()'s accounting for whether MyPgXact->xmin could be cleared or advanced. In normal transactions this was masked by the fact that the transaction snapshot would be older, but during backend startup and certain utility commands it was possible to re-use the CatalogSnapshot after MyPgXact->xmin had been cleared, meaning that recently-deleted rows could be pruned even though this snapshot could still see them, causing unexpected catalog lookup failures. This effect appears to be the explanation for a recent failure on buildfarm member piculet.

To fix, add the CatalogSnapshot to the RegisteredSnapshots heap whenever it is valid.

In the previous logic, it was possible for the CatalogSnapshot to remain valid across waits for client input, but with this change that would mean it delays advance of global xmin in cases where it did not before. To avoid possibly causing new table-bloat problems with clients that sit idle for long intervals, add code to invalidate the CatalogSnapshot before waiting for client input. (When the backend is busy, it's unlikely that the CatalogSnapshot would be the oldest snap for very long, so we don't worry about forcing early invalidation of it otherwise.)

In passing, remove the CatalogSnapshotStale flag in favor of using "CatalogSnapshot != NULL" to represent validity, as we do for the other special snapshots in snapmgr.c. And improve some obsolete comments.

No regression test because I don't know a deterministic way to cause this failure. But the stress test shown in the original discussion provokes "cache lookup failed for relation 1255" within a few dozen seconds for me.

Back-patch to 9.4 where MVCC catalog scans were introduced. (Note: it's quite easy to produce similar failures with the same test case in branches before 9.4. But MVCC catalog scans were supposed to fix that.)

Discussion: <16447.1478818294@sss.pgh.pa.us>

* Limit the number of tapes used for a sort to 501. (Robert Haas, 2016-11-15)

Gigantic numbers of tapes don't work out well.

Original patch by Peter Geoghegan; comments entirely rewritten by me.

* Fix broken statement in UCS_to_most.pl. (Robert Haas, 2016-11-15)

This has been wrong for a very long time, and it's puzzling to me how it ever worked for anyone.

Kyotaro Horiguchi

* Fix typo in comment (Magnus Hagander, 2016-11-14)

The function was renamed in 908e23473, but the comment never learned about it.

* Cleanup of rewriter and planner handling of Query.hasRowSecurity flag. (Tom Lane, 2016-11-10)

Be sure to pull up the subquery's hasRowSecurity flag when flattening a subquery in pull_up_simple_subquery(). This isn't a bug today because we don't look at the hasRowSecurity flag during planning, but it could easily be a bug tomorrow.

Likewise, make rewriteRuleAction() pull up the hasRowSecurity flag when absorbing RTEs from a rule action. This isn't a bug either, for the opposite reason: the flag should never be set yet. But again, it seems like good future proofing.

Add a comment explaining why rewriteTargetView() should *not* set hasRowSecurity when adding stuff to securityQuals.

Improve some nearby comments about securityQuals processing, and document that field more completely in parsenodes.h.

Patch by me, analysis by Dean Rasheed.

Discussion: <CAEZATCXZ8tb2DV6f=bkhsMV6u_gRcZ0CZBw2J-qU84RxSukZog@mail.gmail.com>
* Support "COPY view FROM" for views with INSTEAD OF INSERT triggers.Tom Lane2016-11-10
| | | | | | | | We just pass the data to the INSTEAD trigger. Haribabu Kommi, reviewed by Dilip Kumar Patch: <CAJrrPGcSQkrNkO+4PhLm4B8UQQQmU9YVUuqmtgM=pmzMfxWaWQ@mail.gmail.com>
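
A minimal sketch of the now-supported pattern (names are hypothetical):

    CREATE TABLE base (a int, b text);
    CREATE VIEW v AS SELECT a, b FROM base;

    CREATE FUNCTION v_insert() RETURNS trigger LANGUAGE plpgsql AS $$
    BEGIN
        INSERT INTO base VALUES (NEW.a, NEW.b);
        RETURN NEW;
    END $$;

    CREATE TRIGGER v_ins INSTEAD OF INSERT ON v
        FOR EACH ROW EXECUTE PROCEDURE v_insert();

    COPY v FROM stdin;   -- each input row is handed to the INSTEAD OF trigger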

* Fix partial aggregation for the case of a degenerate GROUP BY clause. (Tom Lane, 2016-11-10)

The plan generated for sorted partial aggregation with "GROUP BY constant" included a Sort node with no sort keys, which the executor does not like.

Per report from Steve Randall. I'd add a regression test case if I could think of a compact one, but it doesn't seem worth expending lots of cycles on.

Report: <CABVd52UAdGXpg_rCk46egpNKYdXOzCjuJ1zG26E2xBe_8bj+Fg@mail.gmail.com>
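
A sketch of the affected query shape (table big(x int) is hypothetical; the SETs merely coax the planner toward a parallel, i.e. partial, aggregation plan):

    SET parallel_setup_cost = 0;
    SET parallel_tuple_cost = 0;
    SELECT sum(x) FROM big GROUP BY 2 + 2;   -- constant grouping key: the Sort
                                             -- above the partial aggregate had
                                             -- no sort keys and was rejected
                                             -- by the executor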

* Simplify code by getting rid of SPI_push, SPI_pop, SPI_restore_connection. (Tom Lane, 2016-11-08)

The idea behind SPI_push was to allow transitioning back into an "unconnected" state when a SPI-using procedure calls unrelated code that might or might not invoke SPI. That sounds good, but in practice the only thing it does for us is to catch cases where a called SPI-using function forgets to call SPI_connect --- which is a highly improbable failure mode, since it would be exposed immediately by direct testing of said function. As against that, we've had multiple bugs induced by forgetting to call SPI_push/SPI_pop around code that might invoke SPI-using functions; these are much harder to catch and indeed have gone undetected for years in some cases.

And we've had to band-aid around some problems of this ilk by introducing conditional push/pop pairs in some places, which really kind of defeats the purpose altogether; if we can't draw bright lines between connected and unconnected code, what's the point?

Hence, get rid of SPI_push[_conditional], SPI_pop[_conditional], and the underlying state variable _SPI_curid. It turns out SPI_restore_connection can go away too, which is a nice side benefit since it was never more than a kluge. Provide no-op macros for the deleted functions so as to avoid an API break for external modules.

A side effect of this removal is that SPI_palloc and allied functions no longer permit being called when unconnected; they'll throw an error instead. The apparent usefulness of the previous behavior was a mirage as well, because it was depended on by only a few places (which I fixed in preceding commits), and it posed a risk of allocations being unexpectedly long-lived if someone forgot a SPI_push call.

Discussion: <20808.1478481403@sss.pgh.pa.us>

* Replace uses of SPI_modifytuple that intend to allocate in current context. (Tom Lane, 2016-11-08)

Invent a new function heap_modify_tuple_by_cols() that is functionally equivalent to SPI_modifytuple except that it always allocates its result by simple palloc. I chose however to make the API details a bit more like heap_modify_tuple: pass a tupdesc rather than a Relation, and use bool convention for the isnull array.

Use this function in place of SPI_modifytuple at all call sites where the intended behavior is to allocate in current context. (There actually are only two call sites left that depend on the old behavior, which makes me wonder if we should just drop this function rather than keep it.)

This new function is easier to use than heap_modify_tuple() for purposes of replacing a single column (or, really, any fixed number of columns). There are a number of places where it would simplify the code to change over, but I resisted that temptation for the moment ... everywhere except in plpgsql's exec_assign_value(); changing that might offer some small performance benefit, so I did it.

This is on the way to removing SPI_push/SPI_pop, but it seems like good code cleanup in its own right.

Discussion: <9633.1478552022@sss.pgh.pa.us>

* Fix typo. (Robert Haas, 2016-11-08)

Michael Paquier

* Make SPI_fnumber() reject dropped columns. (Tom Lane, 2016-11-08)

There's basically no scenario where it's sensible for this to match dropped columns, so put a test for dropped-ness into SPI_fnumber() itself, and excise the test from the small number of callers that were paying attention to the case. (Most weren't :-(.)

In passing, normalize tests at call sites: always reject attnum <= 0 if we're disallowing system columns. Previously there was a mixture of "< 0" and "<= 0" tests. This makes no practical difference since SPI_fnumber() never returns 0, but I'm feeling pedantic today.

Also, in the places that are actually live user-facing code and not legacy cruft, distinguish "column not found" from "can't handle system column".

Per discussion with Jim Nasby; this supersedes his original patch that just changed the behavior at one call site.

Discussion: <b2de8258-c4c0-1cb8-7b97-e8538e5c975c@BlueTreble.com>

* Fix mistake in XLOG_SEG_SIZE test. (Robert Haas, 2016-11-08)

The intent of the test is to check whether XLOG_SEG_SIZE is in a particular range, but actually in one case it compares XLOG_BLCKSZ by mistake. Repair.

Commit 88e982302684246e8af785e78a467ac37c76dee9 introduced this faulty test.

Kuntal Ghosh, reviewed by Michael Paquier.

* Improve handling of dead tuples in hash indexes. (Robert Haas, 2016-11-08)

When squeezing a bucket during vacuum, it's not necessary to retain any tuples already marked as dead, so ignore them when deciding which tuples must be moved in order to empty a bucket page. Similarly, when splitting a bucket, relocating dead tuples to the new bucket is a waste of effort; instead, just ignore them.

Amit Kapila, reviewed by me. Testing help provided by Ashutosh Sharma.

* Save redundant code for pseudotype I/O functions (Peter Eisentraut, 2016-11-07)

Use a macro to generate the in and out functions for pseudotypes that reject all input and output, saving many lines of redundant code. Parameterize the error messages to reduce translatable strings.

* Fix silly nil-pointer-dereference bug introduced in commit d5f6f13f8. (Tom Lane, 2016-11-06)

Don't fetch record->xl_info before we've verified that record isn't NULL. Per Coverity.

Michael Paquier

* Adjust cost_merge_append() to reflect use of binaryheap_replace_first(). (Tom Lane, 2016-11-05)

Commit 7a2fe9bd0 improved merge append so that replacement of a tuple takes log(N) operations, not twice log(N). Since cost_merge_append knew about that explicitly, we should adjust it. This probably makes little difference in practice, but the obsolete comment is confusing.

Ideally this would have been put in in 9.3 with the underlying behavior change; but I'm not going to back-patch it, since there's some small chance of changing a plan choice that somebody's optimized for.

Thomas Munro

Discussion: <CAEepm=0WQBSvuYcMOUj4Ga4NXpu2J=ejZcE=e=eiTjTX-6_gDw@mail.gmail.com>

* Be more consistent about masking xl_info with ~XLR_INFO_MASK. (Tom Lane, 2016-11-04)

Generally, WAL resource managers are only supposed to examine the top 4 bits of a WAL record's xl_info; the rest are reserved for the WAL mechanism itself. A few places were not consistent about doing this with respect to XLOG_CHECKPOINT and XLOG_SWITCH records. There's no bug currently, since no additional bits ever get set in these specific record types, but that might not be true forever. Let's follow the generic coding rule here too.

Michael Paquier

* Implement syntax for transition tables in AFTER triggers. (Kevin Grittner, 2016-11-04)

This is infrastructure for the complete SQL standard feature. No support is included at this point for execution nodes or PLs. The intent is to add that soon.

As this patch leaves things, standard syntax can create tuplestores to contain old and/or new versions of rows affected by a statement. References to these tuplestores are in the TriggerData structure. C triggers can access the tuplestores directly, so they are usable, but they cannot yet be referenced within a SQL statement.
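
The syntax looks like this (names hypothetical; as noted above, at this stage only C triggers can actually read the transition tuplestores):

    CREATE TRIGGER audit_update
        AFTER UPDATE ON accounts
        REFERENCING OLD TABLE AS old_rows NEW TABLE AS new_rows
        FOR EACH STATEMENT
        EXECUTE PROCEDURE audit_func();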

* Add API to check if an existing exclusive lock allows cleanup. (Robert Haas, 2016-11-04)

LockBufferForCleanup() acquires a cleanup lock unconditionally, and ConditionalLockBufferForCleanup() acquires a cleanup lock if it is possible to do so without waiting; this patch adds a new API, IsBufferCleanupOK(), which tests whether an exclusive lock already held happens to be a cleanup lock. This is possible because a cleanup lock simply means an exclusive lock plus the assurance any other pins on the buffer are newer than our own pin.

Therefore, just as the existing functions decide that the exclusive lock that they've just taken is a cleanup lock if they observe the pin count to be 1, this new function allows us to observe that the pin count is 1 on a buffer we've already locked.

This is useful in situations where a backend definitely wishes to modify the buffer and also wishes to perform cleanup operations if possible. The patch to eliminate heavyweight locking by hash indexes uses this, and it may have other applications as well.

Amit Kapila, per a suggestion from me. Some comment adjustments by me as well.

* Don't make FK-based selectivity estimates in inheritance situations. (Tom Lane, 2016-11-02)

The foreign-key-aware logic for estimation of join sizes (added in commit 100340e2d) blindly tried to apply the concept to rels that are actually parents of inheritance trees. This is just plain wrong so far as the referenced relation is concerned, since the inheritance scan may well produce lots of rows that are not participating in the constraint. It's wrong for the referencing relation too, for the same reason; although on that end we could conceivably detect whether all members of the inheritance tree have equivalent FK constraints pointing to the same referenced rel, and then proceed more or less as we do now. But pending somebody writing code to do that, we must disable this, because it's producing completely silly estimates when there's an FK linking the heads of inheritance trees.

Per bug #14404 from Clinton Adams. Back-patch to 9.6 where the new estimation logic came in.

Report: <20161028200412.15987.96482@wrigleys.postgresql.org>

* Don't convert Consts into Vars during setrefs.c processing. (Tom Lane, 2016-11-02)

While converting expressions in an upper-level plan node so that they reference Vars and expressions provided by the input plan node(s), don't convert plain Const items, even if there happens to be a matching Const in the input. It's silly to do so because a Var is more expensive to execute than a Const. Moreover, converting can fool ExecCheckPlanOutput's check that an insert or update query inserts nulls into dropped columns, leading to "query provides a value for a dropped column" errors during INSERT or UPDATE on a table with a dropped column. We could solve this by making that check more complicated, but I don't see the point; this fix should save a marginal number of cycles, and it also makes for less messy EXPLAIN output, as shown by the ensuing regression test result changes.

Per report from Pavel Hanák. I have not incorporated a test case based on that example, as there doesn't seem to be a simple way of checking this in isolation without making a bunch of assumptions about other planner and SQL-function behavior.

Back-patch to 9.6. This setrefs.c behavior exists much further back, but there is not currently reason to think that it causes problems before 9.6.

Discussion: <83shraampf.fsf@is-it.eu>

* Add make rules to download raw Unicode mapping files (Peter Eisentraut, 2016-11-01)

This serves as implicit documentation and is handy if someone wants to tweak things. The rules are not part of a normal build, like this entire directory.

* Fix nasty performance problem in tsquery_rewrite(). (Tom Lane, 2016-10-30)

tsquery_rewrite() tries to find matches to subsets of AND/OR conditions; for example, in the query 'a | b | c' the substitution subquery 'a | c' should match and lead to replacement of the first and third items. That's fine, but the matching algorithm apparently takes about O(2^N) for an N-clause query (I say "apparently" because the code is also both unintelligible and uncommented). We could probably do better than that even without any extra assumptions --- but actually, we know that the subclauses are sorted, indeed are depending on that elsewhere in this very same function. So we can just scan the two lists a single time to detect matches, as though we were doing a merge join.

Also do a re-flattening call (QTNTernary()) in tsquery_rewrite_query, just to make sure that the tree fits the expectations of the next search cycle. I didn't try to devise a test case for this, but I'm pretty sure that the oversight could have led to failure to match in some cases where a match would be expected.

Improve comments, and also stick a CHECK_FOR_INTERRUPTS into dofindsubquery, just in case it's still too slow for somebody.

Per report from Andreas Seltenreich. Back-patch to all supported branches.

Discussion: <8760oasf2y.fsf@credativ.de>
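
For example:

    -- 'a | c' matches two members of the OR list and is replaced as a unit
    SELECT ts_rewrite('a | b | c'::tsquery, 'a | c'::tsquery, 'x'::tsquery);
    -- result: 'b' | 'x'   (ordering may vary)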

* Fix bogus tree-flattening logic in QTNTernary(). (Tom Lane, 2016-10-30)

QTNTernary() contains logic to flatten, eg, '(a & b) & c' into 'a & b & c', which is all well and good, but it tries to do that to NOT nodes as well, so that '!!a' gets changed to '!a'. Explicitly restrict the conversion to be done only on AND and OR nodes, and add a test case illustrating the bug.

In passing, provide some comments for the sadly naked functions in tsquery_util.c, and simplify some baroque logic in QTNFree(), which I think may have been leaking some items it intended to free.

Noted while investigating a complaint from Andreas Seltenreich. Back-patch to all supported versions.

* Improve speed of aggregates that use array_append as transition function. (Tom Lane, 2016-10-30)

In the previous coding, if an aggregate's transition function returned an expanded array, nodeAgg.c and nodeWindowAgg.c would always copy it and thus force it into the flat representation. This led to ping-ponging between flat and expanded formats, which costs a lot. For an aggregate using array_append as transition function, I measured about a 15X slowdown compared to the pre-9.5 code, when working on simple int[] arrays. Of course, the old code was already O(N^2) in this usage due to copying flat arrays all the time, but it wasn't quite this inefficient.

To fix, teach nodeAgg.c and nodeWindowAgg.c to allow expanded transition values without copying, so long as the transition function takes care to return the transition value already properly parented under the aggcontext. That puts a bit of extra responsibility on the transition function, but doing it this way allows us to not need any extra logic in the fast path of advance_transition_function (ie, with a pass-by-value transition value, or with a modified-in-place pass-by-reference value). We already know that that's a hot spot so I'm loath to add any cycles at all there.

Also, while only array_append currently knows how to follow this convention, this solution allows other transition functions to opt-in without needing to have a whitelist in the core aggregation code. (The reason we would need a whitelist is that currently, if you pass a R/W expanded-object pointer to an arbitrary function, it's allowed to do anything with it including deleting it; that breaks the core agg code's assumption that it should free discarded values. Returning a value under aggcontext is the transition function's signal that it knows it is an aggregate transition function and will play nice. Possibly the API rules for expanded objects should be refined, but that would not be a back-patchable change.)

With this fix, an aggregate using array_append is no longer O(N^2), so it's much faster than pre-9.5 code rather than much slower. It's still a bit slower than the bespoke infrastructure for array_agg, but the differential seems to be only about 10%-20% rather than orders of magnitude.

Discussion: <6315.1477677885@sss.pgh.pa.us>
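
The classic example of such an aggregate is array_accum from the documentation, which this commit takes from O(N^2) back to within roughly 10%-20% of array_agg speed:

    CREATE AGGREGATE array_accum (anyelement) (
        sfunc = array_append,
        stype = anyarray,
        initcond = '{}'
    );

    SELECT array_accum(i) FROM generate_series(1, 100000) AS i;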

* Fix leftover reference to background writer performing checkpoints. (Robert Haas, 2016-10-28)

This was changed in PostgreSQL 9.2, but somehow this comment never got updated.