path: root/src/backend
* Fix SPGiST vacuum algorithm to handle concurrent tuple motion properly. (Tom Lane, 2012-03-12)

  A leaf tuple that we need to delete could get moved as a consequence of an insertion happening concurrently with the VACUUM scan. If it moves from a page past the current scan point to a page before, we'll miss it, which is not acceptable. Hence, when we see a leaf-page REDIRECT that could have been made since our scan started, chase down the redirection pointer much as if we were doing a normal index search, and be sure to vacuum every page it leads to. This fixes the issue because, if the tuple was on page N at the instant we start our scan, we will surely find it as a consequence of chasing the redirect from page N, no matter how much it moves around in between.

  Problem noted by Takashi Yamamoto.

* Use correct sizeof operand in qsort call (Peter Eisentraut, 2012-03-12)

  Probably no practical impact, since all pointers ought to have the same size, but it was wrong nonetheless. Found by Coverity.

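  For illustration (this is not the call site the commit touched, which the log does not show), the idiom in question: qsort's third argument must be the size of one array element, conventionally written as sizeof(array[0]) so it stays correct if the element type changes.

      #include <stdlib.h>
      #include <string.h>

      static int
      cmp_cstring(const void *a, const void *b)
      {
          /* each element is a char *, so a and b point to char * values */
          const char *sa = *(const char *const *) a;
          const char *sb = *(const char *const *) b;

          return strcmp(sa, sb);
      }

      static void
      sort_names(char **names, size_t n)
      {
          /* correct: size of one element (a pointer), not of the array or the strings */
          qsort(names, n, sizeof(names[0]), cmp_cstring);
      }
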
* Add comment for missing break in switch (Peter Eisentraut, 2012-03-12)

  For clarity, following other sites, and to silence Coverity.

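  The convention being referred to is the explicit fall-through comment that readers and analyzers such as Coverity recognize; a generic sketch (hypothetical code, not the switch the commit annotated):

      enum step { STEP_INIT, STEP_RUN, STEP_DONE };

      static int
      steps_remaining(enum step s)
      {
          int n = 0;

          switch (s)
          {
              case STEP_INIT:
                  n++;
                  /* FALL THRU */    /* deliberate: INIT implies RUN as well */
              case STEP_RUN:
                  n++;
                  break;
              case STEP_DONE:
                  break;
          }
          return n;
      }
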
* Make INSERT/UPDATE queries depend on their specific target columns. (Tom Lane, 2012-03-11)

  We have always created a whole-table dependency for the target relation, but that's not really good enough, as it doesn't prevent scenarios such as dropping an individual target column or altering its type. So we have to create an individual dependency for each target column, as well. Per report from Bill MacArthur of a rule containing UPDATE breaking after such an alteration. Note that this patch doesn't try to make such cases work, only to ensure that the attempted ALTER TABLE throws an error telling you it can't cope with adjusting the rule.

  This is a long-standing bug, but given the lack of prior reports I'm not going to risk back-patching it. A back-patch wouldn't do anything to fix existing rules' dependency lists, anyway.

* Teach SPGiST to store nulls and do whole-index scans. (Tom Lane, 2012-03-11)

  This patch fixes the other major compatibility-breaking limitation of SPGiST, that it didn't store anything for null values of the indexed column, and so could not support whole-index scans or "x IS NULL" tests.

  The approach is to create a wholly separate search tree for the null entries, and use fixed "allTheSame" insertion and search rules when processing this tree, instead of calling the index opclass methods. This way the opclass methods do not need to worry about dealing with nulls.

  Catversion bump is for pg_am updates as well as the change in on-disk format of SPGiST indexes; there are some tweaks in SPGiST WAL records as well.

  Heavily rewritten version of a patch by Oleg Bartunov and Teodor Sigaev. (The original also stored nulls separately, but it reused GIN code to do so; which required undesirable compromises in the on-disk format, and would likely lead to bugs due to the GIN code being required to work in two very different contexts.)

* Add more detail to error message for invalid arguments for server process (Peter Eisentraut, 2012-03-11)

  It now prints the argument that was at fault.

  Also fix a small misbehavior where the error message issued by getopt() would complain about a program named "--single", because that's what argv[0] is in the server process.

* Restructure SPGiST opclass interface API to support whole-index scans. (Tom Lane, 2012-03-10)

  The original API definition was incapable of supporting whole-index scans because there was no way to invoke leaf-value reconstruction without checking any qual conditions. Also, it was inefficient for multiple-qual-condition scans because value reconstruction got done over again for each qual condition, and because other internal work in the consistent functions likewise had to be done for each qual.

  To fix these issues, pass the whole scankey array to the opclass consistent functions, instead of only letting them see one item at a time. (Essentially, the loop over scankey entries is now inside the consistent functions not outside them. This makes the consistent functions a bit more complicated, but not unreasonably so.)

  In itself this commit does nothing except save a few cycles in multiple-qual-condition index scans, since we can't support whole-index scans on SPGiST indexes until nulls are included in the index. However, I consider this a must-fix for 9.2 because once we release it will get very much harder to change the opclass API definition.

* Add support for renaming constraints (Peter Eisentraut, 2012-03-10)

  reviewed by Josh Berkus and Dimitri Fontaine

* Extend object access hook framework to support arguments, and DROP. (Robert Haas, 2012-03-09)

  This allows loadable modules to get control at drop time, perhaps for the purpose of performing additional security checks or to log the event. The initial purpose of this code is to support sepgsql, but other applications should be possible as well.

  KaiGai Kohei, reviewed by me.

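  As a rough illustration of how a loadable module such as sepgsql plugs into this framework (a sketch; the hook variable, the OAT_DROP access type, and the five-argument signature are assumptions based on catalog/objectaccess.h, not text from the commit):

      #include "postgres.h"
      #include "fmgr.h"
      #include "catalog/objectaccess.h"

      PG_MODULE_MAGIC;

      static object_access_hook_type prev_object_access_hook = NULL;

      static void
      audit_object_access(ObjectAccessType access, Oid classId,
                          Oid objectId, int subId, void *arg)
      {
          /* log drops; a security module could perform extra checks instead */
          if (access == OAT_DROP)
              elog(LOG, "dropping object %u in catalog %u (subId %d)",
                   objectId, classId, subId);

          /* chain to any previously installed hook */
          if (prev_object_access_hook)
              prev_object_access_hook(access, classId, objectId, subId, arg);
      }

      void
      _PG_init(void)
      {
          prev_object_access_hook = object_access_hook;
          object_access_hook = audit_object_access;
      }
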
* Revise FDW planning API, again. (Tom Lane, 2012-03-09)

  Further reflection shows that a single callback isn't very workable if we desire to let FDWs generate multiple Paths, because that forces the FDW to do all work necessary to generate a valid Plan node for each Path. Instead split the former PlanForeignScan API into three steps: GetForeignRelSize, GetForeignPaths, GetForeignPlan. We had already bit the bullet of breaking the 9.1 FDW API for 9.2, so this shouldn't cause very much additional pain, and it's substantially more flexible for complex FDWs.

  Add an fdw_private field to RelOptInfo so that the new functions can save state there rather than possibly having to recalculate information two or three times.

  In addition, we'd not thought through what would be needed to allow an FDW to set up subexpressions of its choice for runtime execution. We could treat ForeignScan.fdw_private as an executable expression but that seems likely to break existing FDWs unnecessarily (in particular, it would restrict the set of node types allowable in fdw_private to those supported by expression_tree_walker). Instead, invent a separate field fdw_exprs which will receive the postprocessing appropriate for expression trees. (One field is enough since it can be a list of expressions; also, we assume the corresponding expression state tree(s) will be held within fdw_state, so we don't need to add anything to ForeignScanState.)

  Per review of Hanada Shigeru's pgsql_fdw patch. We may need to tweak this further as we continue to work on that patch, but to me it feels a lot closer to being right now.

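  A non-authoritative sketch of what the three-step split means for an FDW's handler function; the callback signatures and FdwRoutine field names below are reconstructed from memory of fdwapi.h, not quoted from the commit, and the bodies are stubs:

      #include "postgres.h"
      #include "fmgr.h"
      #include "foreign/fdwapi.h"

      PG_MODULE_MAGIC;
      PG_FUNCTION_INFO_V1(sketch_fdw_handler);

      static void
      sketchGetForeignRelSize(PlannerInfo *root, RelOptInfo *baserel, Oid foreigntableid)
      {
          /* step 1: estimate size; private planning state can go in baserel->fdw_private */
          baserel->rows = 1000;
      }

      static void
      sketchGetForeignPaths(PlannerInfo *root, RelOptInfo *baserel, Oid foreigntableid)
      {
          /* step 2: build one or more ForeignPath nodes (e.g. with
           * create_foreignscan_path) and hand each to add_path(); unlike the
           * old single-callback API this may now happen more than once */
      }

      static ForeignScan *
      sketchGetForeignPlan(PlannerInfo *root, RelOptInfo *baserel, Oid foreigntableid,
                           ForeignPath *best_path, List *tlist, List *scan_clauses)
      {
          /* step 3: turn the chosen path into a ForeignScan plan node, putting
           * any run-time expressions into fdw_exprs; a real FDW must return a
           * constructed node here, NULL only keeps the stub compilable */
          return NULL;
      }

      Datum
      sketch_fdw_handler(PG_FUNCTION_ARGS)
      {
          FdwRoutine *routine = makeNode(FdwRoutine);

          routine->GetForeignRelSize = sketchGetForeignRelSize;
          routine->GetForeignPaths = sketchGetForeignPaths;
          routine->GetForeignPlan = sketchGetForeignPlan;
          /* execution-time callbacks (BeginForeignScan etc.) omitted */

          PG_RETURN_POINTER(routine);
      }
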
* Update outdated comment. HeapTupleHeader.t_natts field doesn't exist anymore. (Heikki Linnakangas, 2012-03-09)

  Kevin Grittner

* Fix some issues with temp/transient tables in extension scripts. (Tom Lane, 2012-03-08)

  Phil Sorber reported that a rewriting ALTER TABLE within an extension update script failed, because it creates and then drops a placeholder table; the drop was being disallowed because the table was marked as an extension member. We could hack that specific case but it seems likely that there might be related cases now or in the future, so the most practical solution seems to be to create an exception to the general rule that extension member objects can only be dropped by dropping the owning extension. To wit: if the DROP is issued within the extension's own creation or update scripts, we'll allow it, implicitly performing an "ALTER EXTENSION DROP object" first. This will simplify cases such as extension downgrade scripts anyway. No docs change since we don't seem to have documented the idea that you would need ALTER EXTENSION DROP for such an action to begin with.

  Also, arrange for explicitly temporary tables to not get linked as extension members in the first place, and the same for the magic pg_temp_nnn schemas that are created to hold them. This prevents assorted unpleasant results if an extension script creates a temp table: the forced drop at session end would either fail or remove the entire extension, and neither of those outcomes is desirable. Note that this doesn't fix the ALTER TABLE scenario, since the placeholder table is not temp (unless the table being rewritten is).

  Back-patch to 9.1.

* Silence warning about unused variable, when building without assertions. (Heikki Linnakangas, 2012-03-08)

* Improve estimation of IN/NOT IN by assuming array elements are distinct. (Tom Lane, 2012-03-07)

  In constructs such as "x IN (1,2,3,4)" and "x <> ALL(ARRAY[1,2,3,4])", we formerly always used a general-purpose assumption that the probability of success is independent for each comparison of "x" to an array element. But in real-world usage of these constructs, that's a pretty poor assumption; it's much saner to assume that the array elements are distinct and so the match probabilities are disjoint. Apply that assumption if the operator appears to behave as equality (for ANY) or inequality (for ALL). But fall back to the normal independent-probabilities calculation if this yields an impossible result, ie probability > 1 or < 0.

  We could protect ourselves against bad estimates even more by explicitly checking for equal array elements, but that is expensive and doesn't seem worthwhile: doing it would amount to optimizing for poorly-written queries at the expense of well-written ones.

  Daniele Varrazzo and Tom Lane, after a suggestion by Ants Aasma

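  To make the two rules concrete (an illustration added here, not text from the commit), write p_i for the estimated probability that x matches the i-th of the n array elements. For the "= ANY" case the two calculations are, roughly:

      P_{\text{independent}} = 1 - \prod_{i=1}^{n} (1 - p_i)
      P_{\text{disjoint}}    = \sum_{i=1}^{n} p_i \quad \text{(used only while the result stays within } [0,1] \text{)}

  For example, with four elements and each p_i = 0.1, the independent rule gives 1 - 0.9^4, about 0.34, while the distinct-elements rule gives 0.4; the "<> ALL" case uses the complementary forms.
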
* Add GetForeignColumnOptions() to foreign.c, and add some documentation. (Tom Lane, 2012-03-07)

  GetForeignColumnOptions provides some abstraction for accessing column-specific FDW options, on a par with the access functions that were already provided here for other FDW-related information.

  Adjust file_fdw.c to use GetForeignColumnOptions instead of equivalent hand-rolled code. In addition, add some SGML documentation for the functions exported by foreign.c that are meant for use by FDW authors. (This is the fdw_helper portion of the proposed pgsql_fdw patch.)

  Hanada Shigeru, reviewed by KaiGai Kohei

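  A sketch of the intended usage along the lines of what file_fdw does; the signature assumed here is (relation OID, attribute number) returning a List of DefElem, which is an assumption based on foreign.h rather than text from the commit:

      #include "postgres.h"
      #include "commands/defrem.h"
      #include "foreign/foreign.h"

      /* Return the value of a per-column FDW option, or NULL if it is not set. */
      static char *
      get_column_option(Oid relid, AttrNumber attnum, const char *optname)
      {
          List     *options = GetForeignColumnOptions(relid, attnum);
          ListCell *lc;

          foreach(lc, options)
          {
              DefElem *def = (DefElem *) lfirst(lc);

              if (strcmp(def->defname, optname) == 0)
                  return defGetString(def);
          }
          return NULL;
      }
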
* Expose an API for calculating catcache hash values. (Tom Lane, 2012-03-07)

  Now that cache invalidation callbacks get only a hash value, and not a tuple TID (per commits 632ae6829f7abda34e15082c91d9dfb3fc0f298b and b5282aa893e565b7844f8237462cb843438cdd5e), the only way they can restrict what they invalidate is to know what the hash values mean. setrefs.c was doing this via a hard-wired assumption but that seems pretty grotty, and it'll only get worse as more cases come up. So let's expose a calculation function that takes the same parameters as SearchSysCache.

  Per complaint from Marko Kreen.

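  The usage pattern this enables looks roughly like the following sketch: a module caches one entry from a syscache-backed catalog and discards it only when an invalidation message with a matching hash value arrives. The GetSysCacheHashValue1() name and the callback signature are assumptions based on the syscache/inval headers, not quoted from the commit.

      #include "postgres.h"
      #include "utils/inval.h"
      #include "utils/syscache.h"

      /* one cached function OID plus the catcache hash value it maps to */
      static Oid    cached_func = InvalidOid;
      static uint32 cached_hash = 0;

      static void
      func_cache_callback(Datum arg, int cacheid, uint32 hashvalue)
      {
          /* drop our entry only if the flushed entry could be ours */
          if (cached_func != InvalidOid && hashvalue == cached_hash)
              cached_func = InvalidOid;
      }

      void
      remember_func(Oid funcid)
      {
          cached_func = funcid;
          cached_hash = GetSysCacheHashValue1(PROCOID, ObjectIdGetDatum(funcid));
      }

      void
      init_func_cache(void)
      {
          CacheRegisterSyscacheCallback(PROCOID, func_cache_callback, (Datum) 0);
      }
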
* Add a hook for processing messages due to be sent to the server log. (Tom Lane, 2012-03-06)

  Use-cases for this include custom log filtering rules and custom log message transmission mechanisms (for instance, lossy log message collection, which has been discussed several times recently). As is our common practice for hooks, there's no regression test nor user-facing documentation for this, though the author did exhibit a sample module using the hook.

  Martin Pihlak, reviewed by Marti Raudsepp

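  The sample module mentioned above is not included in the commit; the following is a hypothetical sketch of a filtering module, assuming the hook is the emit_log_hook pointer declared in elog.h and that it receives the ErrorData about to be emitted:

      #include "postgres.h"
      #include "fmgr.h"
      #include "utils/elog.h"

      PG_MODULE_MAGIC;

      static emit_log_hook_type prev_emit_log_hook = NULL;

      static void
      quiet_log_hook(ErrorData *edata)
      {
          /* example filtering rule: keep chatter below WARNING out of the server log */
          if (edata->elevel < WARNING)
              edata->output_to_server = false;

          /* chain to any previously installed hook */
          if (prev_emit_log_hook)
              prev_emit_log_hook(edata);
      }

      void
      _PG_init(void)
      {
          prev_emit_log_hook = emit_log_hook;
          emit_log_hook = quiet_log_hook;
      }
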
* Typo fix. (Robert Haas, 2012-03-06)

  Fujii Masao

* Make the comments more clear on the fact that UpdateFullPageWrites() is not safe to call concurrently from multiple processes. (Heikki Linnakangas, 2012-03-06)

* Remove extra copies of LogwrtResult. (Heikki Linnakangas, 2012-03-06)

  This simplifies the code a little bit. The new rule is that to update XLogCtl->LogwrtResult, you must hold both WALWriteLock and info_lck, whereas before we had two copies, one that was protected by WALWriteLock and another protected by info_lck. The code that updates them was already holding both locks, so merging the two is trivial.

  The third copy, XLogCtl->Insert.LogwrtResult, was not totally redundant, it was used in AdvanceXLInsertBuffer to update the backend-local copy, before acquiring the info_lck to read the up-to-date value. But the value of that seems dubious; at best it's saving one spinlock acquisition per completed WAL page, which is not significant compared to all the other work involved. And in practice, it's probably not saving even that much.

* Simplify the way changes to full_page_writes are logged. (Heikki Linnakangas, 2012-03-06)

  It's harmless to do full page writes even when not strictly necessary, so when turning full_page_writes on, we can set the global flag first, and then call XLogInsert. Likewise, when turning it off, we can write the WAL record first, and then clear the flag. This way XLogInsert doesn't need any special handling of the XLOG_FPW_CHANGE record type. XLogInsert is complicated enough already, so anything we can keep away from there is a good thing.

  Actually I don't think the atomicity of the shared memory flag matters, anyway, because we only write the XLOG_FPW_CHANGE at the end of recovery, when there are no concurrent WAL insertions going on. But might as well make it safe, in case we allow changing full_page_writes on the fly in the future.

* Redesign PlanForeignScan API to allow multiple paths for a foreign table. (Tom Lane, 2012-03-05)

  The original API specification only allowed an FDW to create a single access path, which doesn't seem like a terribly good idea in hindsight. Instead, move the responsibility for building the Path node and calling add_path() into the FDW's PlanForeignScan function. Now, it can do that more than once if appropriate. There is no longer any need for the transient FdwPlan struct, so get rid of that.

  Etsuro Fujita, Shigeru Hanada, Tom Lane

* Rewrite GiST support code for rangetypes. (Tom Lane, 2012-03-04)

  This patch installs significantly smarter penalty and picksplit functions for ranges, making GiST indexes for them smaller and faster to search.

  There is no on-disk format change, so no catversion bump, but you'd need to REINDEX to get the benefits for any existing index.

  Alexander Korotkov, reviewed by Jeff Davis

* Remove useless "rough estimate" path from mcelem_array_contained_selec. (Tom Lane, 2012-03-04)

  The code in this function that tried to cope with a missing count histogram was quite ineffective for anything except a perfectly flat distribution. Furthermore, since we were already punting for missing MCELEM slot, it's rather useless to sweat over missing DECHIST: there are no cases where ANALYZE will create the first but not the second. So just simplify the code by punting rather than pretending we can do something useful.

* Improve histogram-filling loop in new compute_array_stats() code. (Tom Lane, 2012-03-04)

  Do "frac" arithmetic in int64 to prevent overflow with large statistics targets, and improve the comments so people have some chance of understanding how it works.

  Alexander Korotkov and Tom Lane

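  For illustration only (invented numbers, not the commit's code): the hazard is a 32-bit intermediate product, and the fix is to widen one operand before multiplying.

      #include <stdint.h>

      /* Return num/den scaled to "scale" units. Casting num to int64_t first keeps
       * a product such as 100000 * 50000 from overflowing 32-bit arithmetic. */
      static int64_t
      scaled_fraction(int32_t num, int32_t den, int32_t scale)
      {
          return ((int64_t) num * scale) / den;
      }
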
* More carefully validate xlog location string inputs (Magnus Hagander, 2012-03-04)

  Now that we have validate_xlog_location, call it from the previously existing functions taking xlog locations as a string input.

  Suggested by Fujii Masao

* Add function pg_xlog_location_diff to help comparisons (Magnus Hagander, 2012-03-04)

  Comparing two xlog locations is useful, for example when calculating replication lag.

  Euler Taveira de Oliveira, reviewed by Fujii Masao, and some cleanups from me

* Collect and use element-frequency statistics for arrays. (Tom Lane, 2012-03-03)

  This patch improves selectivity estimation for the array <@, &&, and @> (containment and overlaps) operators. It enables collection of statistics about individual array element values by ANALYZE, and introduces operator-specific estimators that use these stats. In addition, ScalarArrayOpExpr constructs of the forms "const = ANY/ALL (array_column)" and "const <> ANY/ALL (array_column)" are estimated by treating them as variants of the containment operators.

  Since we still collect scalar-style stats about the array values as a whole, the pg_stats view is expanded to show both these stats and the array-style stats in separate columns. This creates an incompatible change in how stats for tsvector columns are displayed in pg_stats: the stats about lexemes are now displayed in the array-related columns instead of the original scalar-related columns.

  There are a few loose ends here, notably that it'd be nice to be able to suppress either the scalar-style stats or the array-element stats for columns for which they're not useful. But the patch is in good enough shape to commit for wider testing.

  Alexander Korotkov, reviewed by Noah Misch and Nathan Boley

* Allow CREATE TABLE (LIKE ...) from composite type (Peter Eisentraut, 2012-03-03)

  The only reason this didn't work before was that parserOpenTable() rejects composite types. So use relation_openrv() directly and manually do the errposition() setup that parserOpenTable() does.

* Allow child-relation entries to be made in ec_has_const EquivalenceClasses. (Tom Lane, 2012-03-02)

  This fixes an oversight in commit 11cad29c91524aac1d0b61e0ea0357398ab79bf8, which introduced MergeAppend plans. Before that happened, we never particularly cared about the sort ordering of scans of inheritance child relations, since appending their outputs together would destroy any ordering anyway. But now it's important to be able to match child relation sort orderings to those of the surrounding query.

  The original coding of add_child_rel_equivalences skipped ec_has_const EquivalenceClasses, on the originally-correct grounds that adding child expressions to them was useless. The effect of this is that when a parent variable is equated to a constant, we can't recognize that index columns on the equivalent child variables are not sort-significant; that is, we can't recognize that a child index on, say, (x, y) is able to generate output in "ORDER BY y" order when there is a clause "WHERE x = constant". Adding child expressions to the (x, constant) EquivalenceClass fixes this, without any downside that I can see other than a few more planner cycles expended on such queries.

  Per recent gripe from Robert McGehee. Back-patch to 9.1 where MergeAppend was introduced.

* Add COLLATION FOR expression (Peter Eisentraut, 2012-03-02)

  reviewed by Jaime Casanova

* When a GiST page is split during index build, it might not have a buffer. (Heikki Linnakangas, 2012-03-02)

  Previously it was thought that it's impossible as the code stands, because insertions create buffers as tuples are cascaded downwards, and index split also creates buffers eagerly for all halves. But the example from Jay Levitt demonstrates that it can happen, when the root page is split. It's in fact OK if the buffer doesn't exist, so we just need to remove the sanity check. In fact, we've been discussing the possibility of destroying empty buffers to conserve memory, which would render the sanity check completely useless anyway.

  Fix by Alexander Korotkov

* Remove TOAST table from pg_database (Alvaro Herrera, 2012-03-01)

  The only toastable column now is datacl, but we don't really support long ACLs anyway. The TOAST table should have been removed when the pg_db_role_setting catalog was introduced in commit 2eda8dfb52ed9962920282d8384da8bb4c22514d, but I forgot to do that.

  Per -hackers discussion in March 2011.

* Correctly detect SSI conflicts of prepared transactions after crash. (Heikki Linnakangas, 2012-02-29)

  A prepared transaction can get new conflicts in and out after preparing, so we cannot rely on the in- and out-flags stored in the statefile at prepare time. As a quick fix, make the conservative assumption that after a restart, all prepared transactions are considered to have both in- and out-conflicts. That can lead to unnecessary rollbacks after a crash, but that shouldn't be a big problem in practice; you don't want prepared transactions to hang around for a long time anyway.

  Dan Ports

* Move CRC tables to libpgport, and provide them in a separate include file. (Tom Lane, 2012-02-28)

  This makes it much more convenient to build tools for Postgres that are separately compiled and require a matching CRC implementation.

  To prevent multiple copies of the CRC polynomial tables being introduced into the postgres binaries, they are now included in the static library libpgport that is mainly meant for replacement system functions. That seems like a bit of a kludge, but there's no better place.

  This cleans up building of the tools pg_controldata and pg_resetxlog, which previously had to build their own copies of pg_crc.o.

  In the future, external programs that need access to the CRC tables can include the tables directly from the new header file pg_crc_tables.h.

  Daniel Farina, reviewed by Abhijit Menon-Sen and Tom Lane

* Fix thinko in new match_join_clauses_to_index() logic. (Tom Lane, 2012-02-28)

  We don't need to constrain the other side of an indexable join clause to not be below an outer join; an example here is

      SELECT FROM t1 LEFT JOIN t2 ON t1.a = t2.b LEFT JOIN t3 ON t2.c = t3.d;

  We can consider an inner indexscan on t3.d using c = d as indexqual, even though t2.c is potentially nulled by a previous outer join. The comparable logic in orindxpath.c has always worked that way, but I was being overly cautious here.

* Add const qualifiers where they are accidentally cast away (Peter Eisentraut, 2012-02-28)

  This only produces warnings under -Wcast-qual, but it's more correct and consistent in any case.

* ALTER TABLE: skip FK validation when it's safe to do so (Alvaro Herrera, 2012-02-27)

  We already skip rewriting the table in these cases, but we still force a whole table scan to validate the data. This can be skipped, and thus we can make the whole ALTER TABLE operation just do some catalog touches instead of scanning the table, when these two conditions hold:

  (a) Old and new pg_constraint.conpfeqop match exactly. This is actually stronger than needed; we could loosen things by way of operator families, but it'd require a lot more effort.

  (b) The functions, if any, implementing a cast from the foreign type to the primary opcintype are the same. For this purpose, we can consider a binary coercion equivalent to an exact type match. When the opcintype is polymorphic, require that the old and new foreign types match exactly. (Since ri_triggers.c does use the executor, the stronger check for polymorphic types is no mere future-proofing. However, no core type exercises its necessity.)

  Author: Noah Misch

  Committer's note: catalog version bumped due to change of the Constraint node. I can't actually find any way to have such a node in a stored rule, but given that we have "out" support for them, better be safe.

* Call check_keywords.pl in maintainer-check (Peter Eisentraut, 2012-02-27)

  For that purpose, have check_keywords.pl print errors to stderr and return a useful exit status.

* Fix some more bugs in GIN's WAL replay logic. (Tom Lane, 2012-02-26)

  In commit 4016bdef8aded77b4903c457050622a5a1815c16 I fixed a bunch of ginxlog.c bugs having to do with not handling XLogReadBuffer failures correctly. However, in ginRedoUpdateMetapage and ginRedoDeleteListPages, I unaccountably thought that failure to read the metapage would be impossible and just put in an elog(PANIC) call. This is of course wrong: failure is exactly what will happen if the index got dropped (or rebuilt) between creation of the WAL record and the crash we're trying to recover from. I believe this explains Nicholas Wilson's recent report of these errors getting reached.

  Also, fix memory leak in forgetIncompleteSplit. This wasn't of much concern when the code was written, but in a long-running standby server page split records could be expected to accumulate indefinitely.

  Back-patch to 8.4 --- before that, GIN didn't have a metapage.

* Remove useless cast (Peter Eisentraut, 2012-02-26)

* Remove useless const qualifier (Peter Eisentraut, 2012-02-26)

  Claiming that the typevar argument to DefineCompositeType() is const was a plain lie. A similar case in DefineVirtualRelation() was already changed in passing in commit 1575fbcb. Also clean up the now unnecessary casts that used to cast away the const.

* Merge dissect() into cdissect() to remove a pile of near-duplicate code. (Tom Lane, 2012-02-24)

  The "uncomplicated" case isn't materially less complicated than the full case, certainly not enough so to justify duplicating nearly 500 lines of code. The only extra work being done in the full path is zaptreesubs, which is very cheap compared to everything else being done here, and besides that I'm less than convinced that it's not needed in some cases even without backrefs.

* Avoid repeated creation/freeing of per-subre DFAs during regex search. (Tom Lane, 2012-02-24)

  In nested sub-regex trees, lower-level nodes created DFAs and then destroyed them again before exiting, which is a bit dumb considering that the recursive search is likely to call those nodes again later. Instead cache each created DFA until the end of pg_regexec().

  This is basically a space for time tradeoff, in that it might increase the maximum memory usage. However, in most regex patterns there are not all that many subre nodes, so not that many DFAs --- and in any case, the peak usage occurs when reaching the bottom recursion level, and except for alternation cases that's going to be the same anyway.

* Remove useless "retry memory" logic within regex engine. (Tom Lane, 2012-02-24)

  Apparently some primordial version of Spencer's engine needed cdissect() and child functions to be able to continue matching from a previous position when re-called. That is dead code, though, since trivial inspection shows that cdissect can never be entered without having previously done zapmem which resets the relevant retry counter. I have also verified experimentally that no case in the Tcl regression tests reaches cdissect with a nonzero retry value. Accordingly, remove that logic.

  This doesn't really save any noticeable number of cycles in itself, but it is one step towards making dissect() and cdissect() equivalent, which will allow removing hundreds of lines of near-duplicated code.

  Since struct subre's "retry" field is no longer particularly related to any kind of retry, rename it to "id". As of this commit it's only used for identifying a subre node in debug printouts, so you might think we should get rid of the field entirely; but I have a plan for another use.

* Add some enumeration commas, for consistency (Peter Eisentraut, 2012-02-24)

* Fix the general case of quantified regex back-references. (Tom Lane, 2012-02-24)

  Cases where a back-reference is part of a larger subexpression that is quantified have never worked in Spencer's regex engine, because he used a compile-time transformation that neglected the need to check the back-reference match in iterations before the last one. (That was okay for capturing parens, and we still do it if the regex has *only* capturing parens ... but it's not okay for backrefs.)

  To make this work properly, we have to add an "iteration" node type to the regex engine's vocabulary of sub-regex nodes. Since this is a moderately large change with a fair risk of introducing new bugs of its own, apply to HEAD only, even though it's a fix for a longstanding bug.

* Correctly handle NULLs in JSON output. (Andrew Dunstan, 2012-02-23)

  Error reported by David Wheeler.

* Remove arbitrary limitation on length of common name in SSL certificates. (Tom Lane, 2012-02-23)

  Both libpq and the backend would truncate a common name extracted from a certificate at 32 bytes. Replace that fixed-size buffer with dynamically allocated string so that there is no hard limit. While at it, remove the code for extracting peer_dn, which we weren't using for anything; and don't bother to store peer_cn longer than we need it in libpq.

  This limit was not so terribly unreasonable when the code was written, because we weren't using the result for anything critical, just logging it. But now that there are options for checking the common name against the server host name (in libpq) or using it as the user's name (in the server), this could result in undesirable failures. In the worst case it even seems possible to spoof a server name or user name, if the correct name is exactly 32 bytes and the attacker can persuade a trusted CA to issue a certificate in which that string is a prefix of the certificate's common name. (To exploit this for a server name, he'd also have to send the connection astray via phony DNS data or some such.) The case that this is a realistic security threat is a bit thin, but nonetheless we'll treat it as one.

  Back-patch to 8.4. Older releases contain the faulty code, but it's not a security problem because the common name wasn't used for anything interesting.

  Reported and patched by Heikki Linnakangas

  Security: CVE-2012-0867

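  A sketch of the no-hard-limit extraction pattern with OpenSSL (not the actual libpq/backend code; it relies on X509_NAME_get_text_by_NID reporting the required length when passed a NULL buffer):

      #include <stdlib.h>
      #include <openssl/x509.h>

      static char *
      get_common_name(X509 *peer_cert)
      {
          X509_NAME *subject = X509_get_subject_name(peer_cert);
          int        len;
          char      *cn;

          /* first call asks only for the length of the common name */
          len = X509_NAME_get_text_by_NID(subject, NID_commonName, NULL, 0);
          if (len < 0)
              return NULL;

          /* allocate exactly what is needed instead of a fixed 32-byte buffer */
          cn = malloc(len + 1);
          if (cn == NULL)
              return NULL;

          X509_NAME_get_text_by_NID(subject, NID_commonName, cn, len + 1);
          return cn;
      }
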
* Require execute permission on the trigger function for CREATE TRIGGER. (Tom Lane, 2012-02-23)

  This check was overlooked when we added function execute permissions to the system years ago.

  For an ordinary trigger function it's not a big deal, since trigger functions execute with the permissions of the table owner, so they couldn't do anything the user issuing the CREATE TRIGGER couldn't have done anyway. However, if a trigger function is SECURITY DEFINER, that is not the case. The lack of checking would allow another user to install it on his own table and then invoke it with, essentially, forged input data; which the trigger function is unlikely to realize, so it might do something undesirable, for instance insert false entries in an audit log table.

  Reported by Dinesh Kumar, patch by Robert Haas

  Security: CVE-2012-0866