path: root/src/backend/commands
* Fix assorted bugs in CREATE INDEX CONCURRENTLY.  (Tom Lane, 2012-11-29)

  This patch changes CREATE INDEX CONCURRENTLY so that the pg_index flag changes it
  makes without exclusive lock on the index are made via heap_inplace_update() rather
  than a normal transactional update.  The latter is not very safe because moving the
  pg_index tuple could result in concurrent SnapshotNow scans finding it twice or not
  at all, thus possibly resulting in index corruption.

  In addition, fix various places in the code that ought to check to make sure that
  the indexes they are manipulating are valid and/or ready as appropriate.  These
  represent bugs that have existed since 8.2, since a failed CREATE INDEX CONCURRENTLY
  could leave a corrupt or invalid index behind, and we ought not try to do anything
  that might fail with such an index.

  Also fix RelationReloadIndexInfo to ensure it copies all the pg_index columns that
  are allowed to change after initial creation.  Previously we could have been left
  with stale values of some fields in an index relcache entry.  It's not clear whether
  this actually had any user-visible consequences, but it's at least a bug waiting to
  happen.

  This is a subset of a patch already applied in 9.2 and HEAD.  Back-patch into all
  earlier supported branches.

  Tom Lane and Andres Freund
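
  As background for the "invalid index" problem described above, a minimal sketch
  (hypothetical table and index names, not part of the commit) of how a failed
  concurrent build leaves an invalid index behind and how it can be spotted:

      CREATE TABLE orders (id int, total numeric);
      CREATE UNIQUE INDEX CONCURRENTLY orders_id_idx ON orders (id);
      -- if the build fails (duplicate keys, cancellation, crash), the index is
      -- left behind marked invalid rather than being removed
      SELECT indexrelid::regclass, indisvalid, indisready
      FROM pg_index
      WHERE indexrelid = 'orders_id_idx'::regclass;
      DROP INDEX orders_id_idx;   -- clean up, then retry the concurrent build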
* Fix handling of inherited check constraints in ALTER COLUMN TYPE.  (Tom Lane, 2012-11-05)

  This case got broken in 8.4 by the addition of an error check that complains if
  ALTER TABLE ONLY is used on a table that has children.  We do use ONLY for this
  situation, but it's okay because the necessary recursion occurs at a higher level.
  So we need to have a separate flag to suppress recursion without invoking the
  error check.

  Reported and patched by Pavan Deolasee, with some editorial adjustments by me.
  Back-patch to 8.4, since this is a regression of functionality that worked in
  earlier branches.
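
  A minimal reproduction sketch of the affected case (hypothetical names, not part
  of the commit):

      CREATE TABLE parent (v varchar(10) CHECK (v <> ''));
      CREATE TABLE child () INHERITS (parent);
      -- previously failed while rebuilding the inherited constraint, complaining
      -- about ALTER TABLE ONLY being used on a table with children
      ALTER TABLE parent ALTER COLUMN v TYPE text;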
* Fix ALTER EXTENSION / SET SCHEMA  (Alvaro Herrera, 2012-10-31)

  In its original conception, it was leaving some objects in the old schema, but
  without their proper pg_depend entries; this meant that the old schema could be
  dropped, causing future pg_dump calls to fail on the affected database.  This was
  originally reported by Jeff Frost as #6704; there have been other complaints
  elsewhere that can probably be traced to this bug.

  To fix, be more consistent about altering a table's subsidiary objects along with
  the table itself; this requires some restructuring in how tables are relocated
  when altering an extension -- hence the new AlterTableNamespaceInternal routine
  which encapsulates it for both the ALTER TABLE and the ALTER EXTENSION cases.

  There was another bug lurking here, which was unmasked after fixing the previous
  one: certain objects would be reached twice via the dependency graph, and the
  second attempt to move them would cause the entire operation to fail.  Per
  discussion, it seems the best fix for this is to do more careful tracking of
  objects already moved: we now maintain a list of moved objects, to avoid
  attempting to do it twice for the same object.

  Authors: Alvaro Herrera, Dimitri Fontaine
  Reviewed by Tom Lane
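
  For context, the command being fixed looks roughly like this (hypothetical schema
  name; hstore is merely an example of a relocatable extension):

      CREATE SCHEMA ext_new;
      ALTER EXTENSION hstore SET SCHEMA ext_new;
      -- after the fix, the extension's tables and their subsidiary objects
      -- (indexes, sequences, constraints) move together and keep their pg_depend
      -- entries, so dropping the old schema no longer breaks pg_dump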
* Fix issues with checks for unsupported transaction states in Hot Standby.  (Tom Lane, 2012-08-24)

  The GUC check hooks for transaction_read_only and transaction_isolation tried to
  check RecoveryInProgress(), so as to disallow setting read/write mode or
  serializable isolation level (respectively) in hot standby sessions.  However, GUC
  check hooks can be called in many situations where we're not connected to shared
  memory at all, resulting in a crash in RecoveryInProgress().  Among other cases,
  this results in EXEC_BACKEND builds crashing during child process start if
  default_transaction_isolation is serializable, as reported by Heikki Linnakangas.
  Protect those calls by silently allowing any setting when not inside a
  transaction; which is okay anyway since these GUCs are always reset at start of
  transaction.

  Also, add a check to GetSerializableTransactionSnapshot() to complain if we are in
  hot standby.  We need that check despite the one in check_XactIsoLevel() because
  default_transaction_isolation could be serializable.  We don't want to complain
  any sooner than this in such cases, since that would prevent running transactions
  at all in such a state; but a transaction can be run, if SET TRANSACTION ISOLATION
  is done before setting a snapshot.  Per report some months ago from Robert Haas.

  Back-patch to 9.1, since these problems were introduced by the SSI patch.

  Kevin Grittner and Tom Lane, with ideas from Heikki Linnakangas
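
  A sketch of the workaround described in the last point, assuming a hot-standby
  session whose default_transaction_isolation is serializable (hypothetical table
  name):

      BEGIN;
      -- must come before any statement that takes a snapshot
      SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
      SELECT count(*) FROM some_table;
      COMMIT;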
* Disallow extensions from owning the schema they are assigned to.  (Tom Lane, 2012-08-15)

  This situation creates a dependency loop that confuses pg_dump and probably other
  things.  Moreover, since the mental model is that the extension "contains" schemas
  it owns, but "is contained in" its extschema (even though neither is strictly
  true), having both true at once is confusing for people too.  So prevent the
  situation from being set up.

  Reported and patched by Thom Brown.  Back-patch to 9.1 where extensions were added.
* Fix dependencies generated during ALTER TABLE ADD CONSTRAINT USING INDEX.  (Tom Lane, 2012-08-11)

  This command generated new pg_depend entries linking the index to the constraint
  and the constraint to the table, which match the entries made when a unique or
  primary key constraint is built de novo.  However, it did not bother to get rid of
  the entries linking the index directly to the table.  We had considered the issue
  when the ADD CONSTRAINT USING INDEX patch was written, and concluded that we
  didn't need to get rid of the extra entries.  But this is wrong: ALTER COLUMN TYPE
  wasn't expecting such redundant dependencies to exist, as reported by Hubert
  Depesz Lubaczewski.  On reflection it seems rather likely to break other things as
  well, since there are many bits of code that crawl pg_depend for one purpose or
  another, and most of them are pretty naive about what relationships they're
  expecting to find.

  Fortunately it's not that hard to get rid of the extra dependency entries, so
  let's do that.

  Back-patch to 9.1, where ALTER TABLE ADD CONSTRAINT USING INDEX was added.
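
  The command under discussion, for reference (hypothetical names, not part of the
  commit):

      CREATE TABLE t (id int NOT NULL, payload text);
      CREATE UNIQUE INDEX t_id_idx ON t (id);
      ALTER TABLE t ADD CONSTRAINT t_id_key UNIQUE USING INDEX t_id_idx;
      -- with the fix, the leftover index-to-table dependency is removed, so a
      -- later ALTER TABLE t ALTER COLUMN id TYPE bigint behaves the same as it
      -- would for a constraint created de novo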
* Fix longstanding crash-safety bug with newly-created-or-reset sequences.  (Tom Lane, 2012-07-25)

  If a crash occurred immediately after the first nextval() call for a serial
  column, WAL replay would restore the sequence to a state in which it appeared that
  no nextval() had been done, thus allowing the first sequence value to be returned
  again by the next nextval() call; as reported in bug #6748 from Xiangming Mei.

  More generally, the problem would occur if an ALTER SEQUENCE was executed on a
  freshly created or reset sequence.  (The manifestation with serial columns was
  introduced in 8.2 when we added an ALTER SEQUENCE OWNED BY step to serial column
  creation.)  The cause is that sequence creation attempted to save one WAL entry by
  writing out a WAL record that made it appear that the first nextval() had already
  happened (viz, with is_called = true), while marking the sequence's in-database
  state with log_cnt = 1 to show that the first nextval() need not emit a WAL
  record.  However, ALTER SEQUENCE would emit a new WAL entry reflecting the actual
  in-database state (with is_called = false).  Then, nextval would allocate the
  first sequence value and set is_called = true, but it would trust the log_cnt
  value and not emit any WAL record.  A crash at this point would thus restore the
  sequence to its post-ALTER state, causing the next nextval() call to return the
  first sequence value again.

  To fix, get rid of the idea of logging an is_called status different from
  reality.  This means that the first nextval-driven WAL record will happen at the
  first nextval call not the second, but the marginal cost of that is pretty
  negligible.  In addition, make sure that ALTER SEQUENCE resets log_cnt to zero in
  any case where it touches sequence parameters that affect future nextval results.
  This will result in some user-visible changes in the contents of a sequence's
  log_cnt column, as reflected in the patch's regression test changes; but no
  application should be depending on that anyway, since it was already true that
  log_cnt changes rather unpredictably depending on checkpoint timing.

  In addition, make some basically-cosmetic improvements to get rid of sequence.c's
  undesirable intimacy with page layout details.  It was always really trying to
  WAL-log the contents of the sequence tuple, so we should have it do that directly
  using a HeapTuple's t_data and t_len, rather than backing into it with some magic
  assumptions about where the tuple would be on the sequence's page.

  Back-patch to all supported branches.
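
  A sketch of the failure sequence in SQL terms (hypothetical table name; the crash
  step is of course simulated when testing):

      CREATE TABLE items (id serial PRIMARY KEY, name text);
      -- serial column creation includes an ALTER SEQUENCE ... OWNED BY step
      SELECT nextval('items_id_seq');               -- returns 1
      SELECT is_called, log_cnt FROM items_id_seq;  -- inspect the sequence state
      -- before the fix, a crash at this point could replay the sequence back to
      -- is_called = false, so the next nextval() would hand out 1 again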
* Have REASSIGN OWNED work on extensions, too  (Alvaro Herrera, 2012-07-03)

  Per bug #6593, REASSIGN OWNED fails when the affected role has created an
  extension.  Even though the user related to the extension is not nominally the
  owner, its OID appears on pg_shdepend and thus causes problems when the user is to
  be dropped.

  This commit adds code to change the "ownership" of the extension itself, not of
  the contained objects.  This is fine because it's currently only called from
  REASSIGN OWNED, which would also modify the ownership of the contained objects.
  However, this is not by itself sufficient for a working ALTER EXTENSION OWNER
  implementation.

  Back-patch to 9.1, where extensions were introduced.

  Bug #6593 reported by Emiliano Leporati.
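
  The failing command, for reference (hypothetical role names, not part of the
  commit):

      -- while connected as role alice: CREATE EXTENSION pg_trgm;
      REASSIGN OWNED BY alice TO bob;   -- previously failed because of the
                                        -- extension's pg_shdepend entry
      DROP ROLE alice;                  -- possible once nothing else depends on
                                        -- the role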
* Prevent CREATE TABLE LIKE/INHERITS from (mis)copying whole-row Vars.  (Tom Lane, 2012-06-30)

  If a CHECK constraint or index definition contained a whole-row Var (that is,
  "table.*"), an attempt to copy that definition via CREATE TABLE LIKE or table
  inheritance produced incorrect results: the copied Var still claimed to have the
  rowtype of the source table, rather than the created table.

  For the LIKE case, it seems reasonable to just throw error for this situation,
  since the point of LIKE is that the new table is not permanently coupled to the
  old, so there's no reason to assume its rowtype will stay compatible.  In the
  inheritance case, we should ideally allow such constraints, but doing so will
  require nontrivial refactoring of CREATE TABLE processing (because we'd need to
  know the OID of the new table's rowtype before we adjust inherited CHECK
  constraints).  In view of the lack of previous complaints, that doesn't seem worth
  the risk in a back-patched bug fix, so just make it throw error for the
  inheritance case as well.

  Along the way, replace change_varattnos_of_a_node() with a more robust function
  map_variable_attnos(), which is capable of being extended to handle insertion of
  ConvertRowtypeExpr whenever we get around to fixing the inheritance case nicely,
  and in the meantime it returns a failure indication to the caller so that a
  helpful message with some context can be thrown.  Also, this code will do the
  right thing with subselects (if we ever allow them in CHECK or indexes), and it
  range-checks varattnos before using them to index into the map array.

  Per report from Sergey Konoplev.  Back-patch to all supported branches.
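
  An illustration of the kind of definition affected, assuming a whole-row
  reference inside a CHECK constraint (hypothetical names, not part of the commit):

      CREATE TABLE src (a int, b int, CHECK ((src.*) IS NOT NULL));
      CREATE TABLE copy1 (LIKE src INCLUDING CONSTRAINTS);  -- now throws an error
      CREATE TABLE copy2 () INHERITS (src);                 -- likewise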
* Fix NOTIFY to cope with I/O problems, such as out-of-disk-space.  (Tom Lane, 2012-06-29)

  The LISTEN/NOTIFY subsystem got confused if SimpleLruZeroPage failed, which would
  typically happen as a result of a write() failure while attempting to dump a dirty
  pg_notify page out of memory.  Subsequently, all attempts to send more NOTIFY
  messages would fail with messages like "Could not read from file "pg_notify/nnnn"
  at offset nnnnn: Success".  Only restarting the server would clear this condition.

  Per reports from Kevin Grittner and Christoph Berg.  Back-patch to 9.0, where the
  problem was introduced during the LISTEN/NOTIFY rewrite.
* Fix DROP TABLESPACE to unlink symlink when directory is not there.  (Tom Lane, 2012-05-13)

  If the tablespace directory is missing entirely, we allow DROP TABLESPACE to go
  through, on the grounds that it should be possible to clean up the catalog entry
  in such a situation.  However, we forgot that the pg_tblspc symlink might still be
  there.  We should try to remove the symlink too (but not fail if it's no longer
  there), since not doing so can lead to weird behavior subsequently, as per report
  from Michael Nolan.

  There was some discussion of adding dependency links to prevent DROP TABLESPACE
  when the catalogs still contain references to the tablespace.  That might be worth
  doing too, but it's an orthogonal question, and in any case wouldn't be
  back-patchable.

  Back-patch to 9.0, which is as far back as the logic looks like this.  We could
  possibly do something similar in 8.x, but given the lack of reports I'm not sure
  it's worth the trouble, and anyway the case could not arise in the form the logic
  is meant to cover (namely, a post-DROP transaction rollback having resurrected the
  pg_tablespace entry after some or all of the filesystem infrastructure is gone).
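
  The scenario, roughly (hypothetical tablespace name and path, not part of the
  commit):

      CREATE TABLESPACE scratch LOCATION '/mnt/fast/scratch';
      -- the directory later disappears outside PostgreSQL (disk replaced, rm -rf, ...)
      DROP TABLESPACE scratch;  -- now also removes the $PGDATA/pg_tblspc symlink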
* Prevent loss of init fork when truncating an unlogged table.  (Robert Haas, 2012-05-11)

  Fixes bug #6635, reported by Akira Kurosawa.
* Fix COPY FROM for null marker strings that correspond to invalid encoding.  (Tom Lane, 2012-03-25)

  The COPY documentation says "COPY FROM matches the input against the null string
  before removing backslashes".  It is therefore reasonable to presume that null
  markers like E'\\0' will work ... and they did, until someone put the tests in the
  wrong order during microoptimization-driven rewrites.  Since then, we've been
  failing if the null marker is something that would de-escape to an
  invalidly-encoded string.  Since null markers generally need to be something that
  can't appear in the data, this represents a nontrivial loss of functionality; it's
  surprising nobody noticed it earlier.

  Per report from Jeff Davis.  Backpatch to 8.4 where this got broken.
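
  For reference, the sort of command this affects (hypothetical table and file
  path, not part of the commit):

      COPY events FROM '/tmp/events.dat' WITH NULL AS E'\\0';
      -- the null marker is the two characters \0; COPY must match it against the
      -- raw input before backslash processing, since de-escaping it would yield
      -- an invalidly-encoded string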
* Fix some issues with temp/transient tables in extension scripts.  (Tom Lane, 2012-03-08)

  Phil Sorber reported that a rewriting ALTER TABLE within an extension update
  script failed, because it creates and then drops a placeholder table; the drop was
  being disallowed because the table was marked as an extension member.  We could
  hack that specific case but it seems likely that there might be related cases now
  or in the future, so the most practical solution seems to be to create an
  exception to the general rule that extension member objects can only be dropped by
  dropping the owning extension.  To wit: if the DROP is issued within the
  extension's own creation or update scripts, we'll allow it, implicitly performing
  an "ALTER EXTENSION DROP object" first.  This will simplify cases such as
  extension downgrade scripts anyway.

  No docs change since we don't seem to have documented the idea that you would need
  ALTER EXTENSION DROP for such an action to begin with.

  Also, arrange for explicitly temporary tables to not get linked as extension
  members in the first place, and the same for the magic pg_temp_nnn schemas that
  are created to hold them.  This prevents assorted unpleasant results if an
  extension script creates a temp table: the forced drop at session end would either
  fail or remove the entire extension, and neither of those outcomes is desirable.
  Note that this doesn't fix the ALTER TABLE scenario, since the placeholder table
  is not temp (unless the table being rewritten is).

  Back-patch to 9.1.
* Require execute permission on the trigger function for CREATE TRIGGER.  (Tom Lane, 2012-02-23)

  This check was overlooked when we added function execute permissions to the system
  years ago.  For an ordinary trigger function it's not a big deal, since trigger
  functions execute with the permissions of the table owner, so they couldn't do
  anything the user issuing the CREATE TRIGGER couldn't have done anyway.  However,
  if a trigger function is SECURITY DEFINER, that is not the case.  The lack of
  checking would allow another user to install it on his own table and then invoke
  it with, essentially, forged input data; which the trigger function is unlikely to
  realize, so it might do something undesirable, for instance insert false entries
  in an audit log table.

  Reported by Dinesh Kumar, patch by Robert Haas

  Security: CVE-2012-0866
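
  A sketch of the setup the new check guards against (hypothetical function and
  table names, not part of the commit):

      -- the owner of the audit log defines a SECURITY DEFINER trigger function
      -- and does not grant EXECUTE on it:
      REVOKE EXECUTE ON FUNCTION log_change() FROM PUBLIC;
      -- another user attaching it to their own table now gets a permission error
      -- instead of being able to feed it forged rows:
      CREATE TRIGGER t_log AFTER INSERT ON my_table
          FOR EACH ROW EXECUTE PROCEDURE log_change();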
* Remove inappropriate quotes  (Peter Eisentraut, 2012-02-23)

  And adjust wording for consistency.
* REASSIGN OWNED: Support foreign data wrappers and servers  (Alvaro Herrera, 2012-02-22)

  This was overlooked when implementing those kinds of objects, in commit
  cae565e503c42a0942ca1771665243b4453c5770.

  Per report from Pawel Casperek.
* Run a portal's cleanup hook immediately when pushing it to FAILED state.  (Tom Lane, 2012-02-15)

  This extends the changes of commit 6252c4f9e201f619e5eebda12fa867acd4e4200e so
  that we run the cleanup hook earlier for failure cases as well as success cases.
  As before, the point is to avoid an assertion failure from an Assert I added in
  commit a874fe7b4c890d1fe3455215a83ca777867beadd, which was meant to check that no
  user-written code can be called during portal cleanup.  This fixes a case reported
  by Pavan Deolasee in which the Assert could be triggered during backend exit (see
  the new regression test case), and also prevents the possibility that the cleanup
  hook is run after portions of the portal's state have already been recycled.  That
  doesn't really matter in current usage, but it foreseeably could matter in the
  future.

  Back-patch to 9.1 where the Assert in question was added.
* Avoid throwing ERROR during WAL replay of DROP TABLESPACE.  (Tom Lane, 2012-02-06)

  Although we will not even issue an XLOG_TBLSPC_DROP WAL record unless removal of
  the tablespace's directories succeeds, that does not guarantee that the same
  operation will succeed during WAL replay.  Foreseeable reasons for it to fail
  include temp files created in the tablespace by Hot Standby backends, wrong
  directory permissions on a standby server, etc etc.

  The original coding threw ERROR if replay failed to remove the directories, but
  that is a serious overreaction.  Throwing an error aborts recovery, and worse
  means that manual intervention will be needed to get the database to start again,
  since otherwise the same error will recur on subsequent attempts to replay the
  same WAL record.  And the consequence of failing to remove the directories is only
  that some probably-small amount of disk space is wasted, so it hardly seems
  justified to throw an error.

  Accordingly, arrange to report such failures as LOG messages and keep going when a
  failure occurs during replay.

  Back-patch to 9.0 where Hot Standby was introduced.  In principle such problems
  can occur in earlier releases, but Hot Standby increases the odds of trouble
  significantly.  Given the lack of field reports of such issues, I'm satisfied with
  patching back as far as the patch applies easily.
* Fix transient clobbering of shared buffers during WAL replay.  (Tom Lane, 2012-02-05)

  RestoreBkpBlocks was in the habit of zeroing and refilling the target buffer;
  which was perfectly safe when the code was written, but is unsafe during Hot
  Standby operation.  The reason is that we have coding rules that allow backends to
  continue accessing a tuple in a heap relation while holding only a pin on its
  buffer.  Such a backend could see transiently zeroed data, if WAL replay had
  occasion to change other data on the page.  This has been shown to be the cause of
  bug #6425 from Duncan Rance (who deserves kudos for developing a
  sufficiently-reproducible test case) as well as Bridget Frey's re-report of bug
  #6200.  It most likely explains the original report as well, though we don't yet
  have confirmation of that.

  To fix, change the code so that only bytes that are supposed to change will
  change, even transiently.  This actually saves cycles in RestoreBkpBlocks, since
  it's not writing the same bytes twice.

  Also fix seq_redo, which has the same disease, though it has to work a bit harder
  to meet the requirement.

  So far as I can tell, no other WAL replay routines have this type of bug.  In
  particular, the index-related replay routines, which would certainly be broken if
  they had to meet the same standard, are not at risk because we do not have coding
  rules that allow access to an index page when not holding a buffer lock on it.

  Back-patch to 9.0 where Hot Standby was added.
* Accept a non-existent value in "ALTER USER/DATABASE SET ..." command.  (Heikki Linnakangas, 2012-01-30)

  When default_text_search_config, default_tablespace, or temp_tablespaces setting
  is set per-user or per-database, with an "ALTER USER/DATABASE SET ..." statement,
  don't throw an error if the text search configuration or tablespace does not
  exist.  In case of text search configuration, even if it doesn't exist in the
  current database, it might exist in another database, where the setting is
  intended to have its effect.  This behavior is now the same as search_path's.

  Tablespaces are cluster-wide, so the same argument doesn't hold for tablespaces,
  but there's a problem with pg_dumpall: it dumps "ALTER USER SET ..." statements
  before the "CREATE TABLESPACE" statements.  Arguably that's pg_dumpall's fault -
  it should dump the statements in such an order that the tablespace is created
  first and then the "ALTER USER SET default_tablespace ..." statements after that -
  but it seems better to be consistent with search_path and
  default_text_search_config anyway.  Besides, you could still create a dump that
  throws an error, by creating the tablespace, running "ALTER USER SET
  default_tablespace", then dropping the tablespace and running pg_dumpall on that.

  Backpatch to all supported versions.
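
  For reference, the kind of statement that is now accepted even when the named
  object does not (yet) exist (hypothetical names, not part of the commit):

      ALTER USER report_user SET default_tablespace = 'reports_space';
      ALTER DATABASE analytics SET default_text_search_config = 'public.my_config';
      -- neither statement fails outright anymore if the tablespace or text search
      -- configuration is missing at the time it is run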
* Fix CLUSTER/VACUUM FULL for toast values owned by recently-updated rows.  (Tom Lane, 2012-01-12)

  In commit 7b0d0e9356963d5c3e4d329a917f5fbb82a2ef05, I made CLUSTER and VACUUM FULL
  try to preserve toast value OIDs from the original toast table to the new one.
  However, if we have to copy both live and recently-dead versions of a row that has
  a toasted column, those versions may well reference the same toast value with the
  same OID.  The patch then led to duplicate-key failures as we tried to insert the
  toast value twice with the same OID.  (The previous behavior was not very
  desirable either, since it would have silently inserted the same value twice with
  different OIDs.  That wastes space, but what's worse is that the toast values
  inserted for already-dead heap rows would not be reclaimed by subsequent ordinary
  VACUUMs, since they go into the new toast table marked live not deleted.)

  To fix, check if the copied OID already exists in the new toast table, and if so,
  assume that it stores the desired value.  This is reasonably safe since the only
  case where we will copy an OID from a previous toast pointer is when
  toast_insert_or_update was given that toast pointer and so we just pulled the data
  from the old table; if we got two different values that way then we have big
  problems anyway.  We do have to assume that no other backend is inserting items
  into the new toast table concurrently, but that's surely safe for CLUSTER and
  VACUUM FULL.

  Per bug #6393 from Maxim Boguk.  Back-patch to 9.0, same as the previous patch.
* Update per-column ACLs, not only per-table ACL, when changing table owner.  (Tom Lane, 2011-12-21)

  We forgot to modify column ACLs, so privileges were still shown as having been
  granted by the old owner.  This meant that neither the new owner nor a superuser
  could revoke the now-untraceable-to-table-owner permissions.

  Per bug #6350 from Marc Balmer.  This has been wrong since column ACLs were added,
  so back-patch to 8.4.
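
  A sketch of the scenario (hypothetical roles and table, not part of the commit):

      GRANT SELECT (salary) ON staff TO auditor;   -- column-level privilege
      ALTER TABLE staff OWNER TO hr_admin;
      -- before the fix, the column ACL still recorded the old owner as grantor,
      -- so neither hr_admin nor a superuser could cleanly revoke it:
      REVOKE SELECT (salary) ON staff FROM auditor;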
* Disallow deletion of CurrentExtensionObject while running extension script.  (Tom Lane, 2011-11-28)

  While the deletion in itself wouldn't break things, any further creation of
  objects in the script would result in dangling pg_depend entries being added by
  recordDependencyOnCurrentExtension().  An example from Phil Sorber convinced me
  that this is just barely likely enough to be worth expending a couple lines of
  code to defend against.  The resulting error message might be confusing, but it's
  better than leaving corrupted catalog contents for the user to deal with.
* Change FK trigger creation order to better support self-referential FKs.  (Tom Lane, 2011-10-26)

  When a foreign-key constraint references another column of the same table, row
  updates will queue both the PK's ON UPDATE action and the FK's CHECK action in the
  same event.  The ON UPDATE action must execute first, else the CHECK will check a
  non-final state of the row and possibly throw an inappropriate error, as seen in
  bug #6268 from Roman Lytovchenko.

  Now, the firing order of multiple triggers for the same event is determined by the
  sort order of their pg_trigger.tgnames, and the auto-generated names we use for FK
  triggers are "RI_ConstraintTrigger_NNNN" where NNNN is the trigger OID.  So most
  of the time the firing order is the same as creation order, and so rearranging the
  creation order fixes it.

  This patch will fail to fix the problem if the OID counter wraps around or adds a
  decimal digit (eg, from 99999 to 100000) while we are creating the triggers for an
  FK constraint.  Given the small odds of that, and the low usage of
  self-referential FKs, we'll live with that solution in the back branches.  A
  better fix is to change the auto-generated names for FK triggers, but it seems
  unwise to do that in stable branches because there may be client code that depends
  on the naming convention.  We'll fix it that way in HEAD in a separate patch.

  Back-patch to all supported branches, since this bug has existed for a long time.
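
  A self-referential foreign key of the kind affected (hypothetical table, not part
  of the commit):

      CREATE TABLE category (
          id        int PRIMARY KEY,
          parent_id int REFERENCES category (id) ON UPDATE CASCADE
      );
      INSERT INTO category VALUES (1, 1);
      -- the cascaded PK update and the FK check are queued in the same event,
      -- so the cascade must fire first
      UPDATE category SET id = 2 WHERE id = 1;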
* Fix DROP OPERATOR FAMILY IF EXISTS.  (Robert Haas, 2011-10-21)

  Essentially, the "IF EXISTS" portion was being ignored, and an error thrown anyway
  if the opfamily did not exist.  I broke this in commit
  fd1843ff8979c0461fb3f1a9eab61140c977e32d; so backpatch to 9.1.X.

  Report and diagnosis by KaiGai Kohei.
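
  The command in question (hypothetical operator family name, not part of the
  commit):

      DROP OPERATOR FAMILY IF EXISTS my_int_ops USING btree;
      -- should merely report that the operator family does not exist, instead of
      -- raising an error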
* Throw a useful error message if an extension script file is fed to psql.  (Tom Lane, 2011-10-12)

  We have seen one too many reports of people trying to use 9.1 extension files in
  the old-fashioned way of sourcing them in psql.  Not only does that usually not
  work (due to failure to substitute for MODULE_PATHNAME and/or @extschema@), but if
  it did work they'd get a collection of loose objects not an extension.  To prevent
  this, insert an \echo ... \quit line that prints a suitable error message into
  each extension script file, and teach commands/extension.c to ignore lines
  starting with \echo.  That should not only prevent any adverse consequences of
  loading a script file the wrong way, but make it crystal clear to users that they
  need to do it differently now.

  Tom Lane, following an idea of Andrew Dunstan's.  Back-patch into 9.1 ... there is
  not going to be much value in this if we wait till 9.2.
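
  The guard line at the top of each script follows this pattern (shown here with a
  hypothetical extension name):

      -- complain if script is sourced in psql, rather than via CREATE EXTENSION
      \echo Use "CREATE EXTENSION my_extension" to load this file. \quit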
* Improve and simplify CREATE EXTENSION's management of GUC variables.  (Tom Lane, 2011-10-05)

  CREATE EXTENSION needs to transiently set search_path, as well as
  client_min_messages and log_min_messages.  We were doing this by the expedient of
  saving the current string value of each variable, doing a SET LOCAL, and then
  doing another SET LOCAL with the previous value at the end of the command.  This
  is a bit expensive though, and it also fails badly if there is anything funny
  about the existing search_path value, as seen in a recent report from Roger
  Niederland.

  Fortunately, there's a much better way, which is to piggyback on the GUC
  infrastructure previously developed for functions with SET options.  We just open
  a new GUC nesting level, do our assignments with GUC_ACTION_SAVE, and then close
  the nesting level when done.  This automatically restores the prior settings
  without a re-parsing pass, so (in principle anyway) there can't be an error.  And
  guc.c still takes care of cleanup in event of an error abort.

  The CREATE EXTENSION code for this was modeled on some much older code in
  ri_triggers.c, which I also changed to use the better method, even though there
  wasn't really much risk of failure there.  Also improve the comments in guc.c to
  reflect this additional usage.
* Fix typo in error message.  (Tom Lane, 2011-09-07)

  Per Euler Taveira de Oliveira.
* Avoid possibly accessing off the end of memory in examine_attribute().  (Tom Lane, 2011-09-06)

  Since the last couple of columns of pg_type are often NULL,
  sizeof(FormData_pg_type) can be an overestimate of the actual size of the tuple
  data part.  Therefore memcpy'ing that much out of the catalog cache, as analyze.c
  was doing, poses a small risk of copying past the end of memory and incurring
  SIGSEGV.  No such crash has been identified in the field, but we've certainly seen
  the equivalent happen in other code paths, so patch this one all the way back.

  Per valgrind testing by Noah Misch, though this is not his proposed patch.  I
  chose to use SearchSysCacheCopy1 rather than inventing special-purpose
  infrastructure for copying only the minimal part of a pg_type tuple.
* Fix a missed case in code for "moving average" estimate of reltuples.  (Tom Lane, 2011-08-30)

  It is possible for VACUUM to scan no pages at all, if the visibility map shows
  that all pages are all-visible.  In this situation VACUUM has no new information
  to report about the relation's tuple density, so it wasn't changing
  pg_class.reltuples ... but it updated pg_class.relpages anyway.  That's wrong in
  general, since there is no evidence to justify changing the density ratio
  reltuples/relpages, but it's particularly bad if the previous state was
  relpages=reltuples=0, which means "unknown tuple density".  We just replaced
  "unknown" with "zero".  ANALYZE would eventually recover from this, but it could
  take a lot of repetitions of ANALYZE to do so if the relation size is much larger
  than the maximum number of pages ANALYZE will scan, because of the moving-average
  behavior introduced by commit b4b6923e03f4d29636a94f6f4cc2f5cf6298b8c8.

  The only known situation where we could have relpages=reltuples=0 and yet the
  visibility map asserts everything's visible is immediately following a pg_upgrade.
  It might be advisable for pg_upgrade to try to preserve the relpages/reltuples
  statistics; but in any case this code is wrong on its own terms, so fix it.  Per
  report from Sergey Koposov.

  Back-patch to 8.4, where the visibility map was introduced, same as the previous
  change.
* Make CREATE EXTENSION check schema creation permissions.  (Tom Lane, 2011-08-23)

  When creating a new schema for a non-relocatable extension, we neglected to check
  whether the calling user has permission to create schemas.  That didn't matter in
  the original coding, since we had already checked superuserness, but in the new
  dispensation where users need not be superusers, we should check it.  Use
  CreateSchemaCommand() rather than calling NamespaceCreate() directly, so that we
  also enforce the rules about reserved schema names.

  Per complaint from KaiGai Kohei, though this isn't the same as his patch.
* Fix trigger WHEN conditions when both BEFORE and AFTER triggers exist.  (Tom Lane, 2011-08-21)

  Due to tuple-slot mismanagement, evaluation of WHEN conditions for AFTER ROW
  UPDATE triggers could crash if there had been a BEFORE ROW trigger fired for the
  same update.  Fix by not trying to overload the use of
  estate->es_trig_tuple_slot.  Per report from Yoran Heling.

  Back-patch to 9.0, when trigger WHEN conditions were introduced.
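
  The combination that could crash, roughly (hypothetical table and trigger
  functions, not part of the commit):

      CREATE TRIGGER t_before BEFORE UPDATE ON accounts
          FOR EACH ROW EXECUTE PROCEDURE before_fn();
      CREATE TRIGGER t_after AFTER UPDATE ON accounts
          FOR EACH ROW WHEN (OLD.balance IS DISTINCT FROM NEW.balance)
          EXECUTE PROCEDURE after_fn();
      -- both triggers fire for the same update, and evaluating the AFTER
      -- trigger's WHEN condition could crash before the fix
      UPDATE accounts SET balance = balance + 1;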
* Preserve toast value OIDs in toast-swap-by-content for CLUSTER/VACUUM FULL.  (Tom Lane, 2011-08-16)

  This works around the problem that a catalog cache entry might contain a toast
  pointer that we try to dereference just as a VACUUM FULL completes on that
  catalog.  We will see the sinval message on the cache entry when we acquire lock
  on the toast table, but by that point we've already told tuptoaster.c "here's the
  pointer to fetch", so it's difficult from a code structural standpoint to update
  the pointer before we use it.  Much less painful to ensure that toast pointers are
  not invalidated in the first place.

  We have to add a bit of code to deal with the case that a value that previously
  wasn't toasted becomes so; but that should be a seldom-exercised corner case, so
  the inefficiency shouldn't be significant.

  Back-patch to 9.0.  In prior versions, we didn't allow CLUSTER on system catalogs,
  and VACUUM FULL didn't result in reassignment of toast OIDs, so there was no
  problem.
* Fix unsafe order of operations in foreign-table DDL commands.  (Tom Lane, 2011-08-14)

  When updating or deleting a system catalog tuple, it's necessary to acquire
  RowExclusiveLock on the catalog before looking up the tuple; otherwise a
  concurrent VACUUM FULL on the catalog might move the tuple to a different TID
  before we can apply the update.  Coding patterns that find the tuple via a table
  scan aren't at risk here, but when obtaining the tuple from a catalog cache,
  correct ordering is important; and several routines in foreigncmds.c got it wrong.

  Noted while running the regression tests in parallel with VACUUM FULL of assorted
  system catalogs.

  For consistency I moved all the heap_open calls to the starts of their functions,
  including a couple for which there was no actual bug.

  Back-patch to 8.4 where foreigncmds.c was added.
* Rethink behavior of CREATE OR REPLACE during CREATE EXTENSION.  (Tom Lane, 2011-07-23)

  The original implementation simply did nothing when replacing an existing object
  during CREATE EXTENSION.  The folly of this was exposed by a report from Marc
  Munro: if the existing object belongs to another extension, we are left in an
  inconsistent state.  We should insist that the object does not belong to another
  extension, and then add it to the current extension if not already a member.
* Replace errdetail("%s", ...) with errdetail_internal("%s", ...).  (Tom Lane, 2011-07-16)

  There may be some other places where we should use errdetail_internal, but they'll
  have to be evaluated case-by-case.  This commit just hits a bunch of places where
  invoking gettext is obviously a waste of cycles.
* Avoid listing ungrouped Vars in the targetlist of Agg-underneath-Window.  (Tom Lane, 2011-07-12)

  Regular aggregate functions in combination with, or within the arguments of,
  window functions are OK per spec; they have the semantics that the aggregate
  output rows are computed and then we run the window functions over that row set.
  (Thus, this combination is not really useful unless there's a GROUP BY so that
  more than one aggregate output row is possible.)  The case without GROUP BY could
  fail, as recently reported by Jeff Davis, because sloppy construction of the Agg
  node's targetlist resulted in extra references to possibly-ungrouped Vars
  appearing outside the aggregate function calls themselves.  See the added
  regression test case for an example.

  Fixing this requires modifying the API of flatten_tlist and its underlying
  function pull_var_clause.  I chose to make pull_var_clause's API for aggregates
  identical to what it was already doing for placeholders, since the useful
  behaviors turn out to be the same (error, report node as-is, or recurse into it).
  I also tightened the error checking in this area a bit: if it was ever valid to
  see an uplevel Var, Aggref, or PlaceHolderVar here, that was a long time ago, so
  complain instead of ignoring them.

  Backpatch into 9.1.  The failure exists in 8.4 and 9.0 as well, but seeing that it
  only occurs in a basically-useless corner case, it doesn't seem worth the risks of
  changing a function API in a minor release.  There might be third-party code using
  pull_var_clause.
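
  A query of the affected shape, as a rough illustration (hypothetical table; the
  committed regression test case may differ):

      -- an aggregate inside a window function's arguments, with no GROUP BY:
      SELECT rank() OVER (ORDER BY sum(amount)) FROM payments;
      -- per spec, the aggregate first collapses the input to a single row, and
      -- the window function then runs over that one-row set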
* Fix another oversight in logging of changes in postgresql.conf settings.  (Tom Lane, 2011-07-08)

  We were using GetConfigOption to collect the old value of each setting,
  overlooking the possibility that it didn't exist yet.  This does happen in the
  case of adding a new entry within a custom variable class, as exhibited in bug
  #6097 from Maxim Boguk.

  To fix, add a missing_ok parameter to GetConfigOption, but only in 9.1 and HEAD
  --- it seems possible that some third-party code is using that function, so
  changing its API in a minor release would cause problems.  In 9.0, create a
  near-duplicate function instead.
* Message style improvements  (Peter Eisentraut, 2011-07-08)
* Finish disabling reduced-lock-levels-for-DDL feature.  (Tom Lane, 2011-07-07)

  The previous patch only covered the ALTER TABLE changes, not changes in other
  commands, and it neglected to revert the documentation changes.
* Call FDW validator functions even when the options list is empty.  (Tom Lane, 2011-07-05)

  This is useful since a validator might want to require certain options to be
  provided.  The passed array is an empty text array in this case.

  Per suggestion by Laurenz Albe, though this is not quite his patch.
* Message style tweaks  (Peter Eisentraut, 2011-07-05)
* Reset ALTER TABLE lock levels to AccessExclusiveLock in all cases.  (Simon Riggs, 2011-07-04)

  Locks on the inheritance parent remain at a lower level, as they were before.
  Remove the corresponding entry from the 9.1 release notes.
* Fix bugs in relpersistence handling during table creation.  (Robert Haas, 2011-07-03)

  Unlike the relistemp field which it replaced, relpersistence must be set correctly
  quite early during the table creation process, as we rely on it quite early on for
  a number of purposes, including security checks.  Normally, this is set based on
  whether the user enters CREATE TABLE, CREATE UNLOGGED TABLE, or CREATE TEMPORARY
  TABLE, but a relation may also be made implicitly temporary by creating it in
  pg_temp.  This patch fixes the handling of that case, and also disables creation
  of unlogged tables in temporary tablespace (such tables indeed skip WAL-logging,
  but we reject an explicit specification) and creation of relations in the
  temporary schemas of other sessions (which is not very sensible, and didn't work
  right anyway).

  Report by Amit Khandekar.
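
  For reference, the implicitly-temporary case being fixed (hypothetical table
  name, not part of the commit):

      CREATE TABLE pg_temp.scratch (x int);
      -- no TEMPORARY keyword, but the table is temporary because it lives in the
      -- session's pg_temp schema; relpersistence must reflect that from the start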
* Unify spelling of "canceled", "canceling", "cancellation"  (Peter Eisentraut, 2011-07-02)

  We had previously (af26857a2775e7ceb0916155e931008c2116632f) established the U.S.
  spellings as standard.
* Message style and spelling improvements  (Peter Eisentraut, 2011-06-22)
* Fix thinko in previous patch to always update pg_class.reltuples/relpages.  (Tom Lane, 2011-06-19)

  I mis-simplified the test where ANALYZE decided if it could get away without doing
  anything: under the new regime, that's never allowed.  Per bug #6068 from Jeff
  Janes.  Back-patch to 8.4, just like the previous patch.
* Rework parsing of ConstraintAttributeSpec to improve NOT VALID handling.  (Tom Lane, 2011-06-15)

  The initial commit of the ALTER TABLE ADD FOREIGN KEY NOT VALID feature failed to
  support labeling such constraints as deferrable.  The best fix for this seems to
  be to fold NOT VALID into ConstraintAttributeSpec.  That's a bit more general than
  the documented syntax, but it allows better-targeted syntax error messages.

  In addition, do some mostly-but-not-entirely-cosmetic code review for the whole
  NOT VALID patch.
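
  The combination that previously could not be expressed, roughly (hypothetical
  tables; the exact clause order shown is an assumption):

      ALTER TABLE orders
          ADD CONSTRAINT orders_customer_fk
          FOREIGN KEY (customer_id) REFERENCES customers (id)
          DEFERRABLE INITIALLY DEFERRED NOT VALID;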
* Pgindent run before 9.1 beta2.  (Bruce Momjian, 2011-06-09)