path: root/src/backend
...
* Fix concat_ws() to not insert a separator after leading NULL argument(s). (Tom Lane, 2011-08-29)
  Per bug #6181 from Itagaki Takahiro. Also do some marginal code cleanup and improve error handling.
* Improve spinlock performance for HP-UX, ia64, non-gcc. (Robert Haas, 2011-08-29)
  At least on this architecture, it's very important to spin on a non-atomic instruction and only retry the atomic once it appears that it will succeed. To fix this, split TAS() into two macros: TAS(), for trying to grab the lock the first time, and TAS_SPIN(), for spinning until we get it. TAS_SPIN() defaults to the same as TAS(), but we can override it when we know there's a better way.

  It's likely that some of the other cases in s_lock.h require similar treatment, but this is the only one we've got conclusive evidence for at present.
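  A minimal sketch of the two-macro arrangement described above: the spin-time macro peeks at the lock with a plain read and retries the atomic only once the lock looks free. The helper name and the __sync builtin are placeholders, not the actual s_lock.h code, which uses platform-specific atomics.

      typedef volatile int slock_t;

      /* Stand-in for the platform's atomic test-and-set; returns the
       * previous value, so 0 means we acquired the lock. */
      static int
      example_tas(slock_t *lock)
      {
          return __sync_lock_test_and_set(lock, 1);
      }

      /* First attempt at the lock: go straight for the atomic operation. */
      #define TAS(lock)       example_tas(lock)

      /* While spinning: only retry the atomic once the lock appears free. */
      #define TAS_SPIN(lock)  (*(lock) ? 1 : TAS(lock))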
* Allow more include files to be compiled on their own by adding missing include dependencies. (Bruce Momjian, 2011-08-27)
  Modify pgcompinclude to skip a common fcinfo error.
* Implement the information schema with_hierarchy column. (Peter Eisentraut, 2011-08-27)
  In PostgreSQL, this is included in the SELECT privilege, so show YES or NO depending on whether SELECT is granted.
* Add missing includes after pgrminclude run. (Bruce Momjian, 2011-08-26)
* Add markers for skips. (Bruce Momjian, 2011-08-26)
* Fix potential memory clobber in tsvector_concat(). (Tom Lane, 2011-08-26)
  tsvector_concat() allocated its result workspace using the "conservative" estimate of the sum of the two input tsvectors' sizes. Unfortunately that wasn't so conservative as all that, because it supposed that the number of pad bytes required could not grow. Which it can, as per test case from Jesper Krogh, if there's a mix of lexemes with positions and lexemes without them in the input data.

  The fix is to assume that we might add a not-previously-present pad byte for each and every lexeme in the two inputs; which really is conservative, but it doesn't seem worthwhile to try to be more precise.

  This is an aboriginal bug in tsvector_concat, so back-patch to all versions containing it.
* Add makefile rules to check for backtracking in backend and psql lexers. (Tom Lane, 2011-08-25)
  Per discussion, we should enforce the policy of "no backtracking" in these performance-sensitive scanners.
* Add "%option warn" to all flex input files that lacked it.Tom Lane2011-08-25
| | | | | This is recommended in the flex manual, and there seems no good reason not to use it everywhere.
* Tweak postgresql.conf.sample's comments on listen_addresses. (Robert Haas, 2011-08-25)
  This makes it slightly more clear that '*' is not part of the default value, in case that wasn't obvious. As requested by Dougal Sutherland.
* Fix multiple bugs in extension dropping. (Tom Lane, 2011-08-24)
  When we implemented extensions, we made findDependentObjects() treat EXTENSION dependency links similarly to INTERNAL links. However, that logic contained an implicit assumption that an object could have at most one INTERNAL dependency, so it did not work correctly for objects having both INTERNAL and DEPENDENCY links. This led to failure to drop some extension member objects when dropping the extension.

  Furthermore, we'd never actually exercised the case of recursing to an internally-referenced (owning) object from anything other than a NORMAL dependency, and it turns out that passing the incoming dependency's flags to the owning object is the Wrong Thing. This led to sometimes dropping a whole extension silently when we should have rejected the drop command for lack of CASCADE.

  Since we obviously were under-testing extension drop scenarios, add some regression test cases. Unfortunately, such test cases require some extensions (duh), so we can't test for problems in the core regression tests. I chose to add them to the earthdistance contrib module, which is a good test case because it has a dependency on the cube contrib module.

  Back-patch to 9.1. Arguably these are pre-existing bugs in INTERNAL dependency handling, but since it appears that the cases can never arise pre-9.1, I'll refrain from back-patching the logic changes further than that.
* Make CREATE EXTENSION check schema creation permissions. (Tom Lane, 2011-08-23)
  When creating a new schema for a non-relocatable extension, we neglected to check whether the calling user has permission to create schemas. That didn't matter in the original coding, since we had already checked superuserness, but in the new dispensation where users need not be superusers, we should check it. Use CreateSchemaCommand() rather than calling NamespaceCreate() directly, so that we also enforce the rules about reserved schema names.

  Per complaint from KaiGai Kohei, though this isn't the same as his patch.
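  A rough sketch of the pattern, assuming the 9.1-era CreateSchemaCommand() interface (the schemaName variable and the NULL query string are assumptions, not quoted from the commit): routing the creation through the regular CREATE SCHEMA path is what brings the permission and reserved-name checks along.

      CreateSchemaStmt *csstmt = makeNode(CreateSchemaStmt);

      csstmt->schemaname = schemaName;    /* schema the extension wants */
      csstmt->authid = NULL;              /* owned by the current user */
      csstmt->schemaElts = NIL;           /* no embedded schema elements */
      CreateSchemaCommand(csstmt, NULL);  /* performs ACL and name checks */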
* Fix overoptimistic assumptions in column width estimation for subqueries. (Tom Lane, 2011-08-23)
  set_append_rel_pathlist supposed that, while computing per-column width estimates for the appendrel, it could ignore child rels for which the translated reltargetlist entry wasn't a Var. This gave rise to completely silly estimates in some common cases, such as constant outputs from some or all of the arms of a UNION ALL. Instead, fall back on get_typavgwidth to estimate from the value's datatype; which might be a poor estimate but at least it's not completely wacko.

  That problem was exposed by an Assert in set_subquery_size_estimates, which unfortunately was still overoptimistic even with that fix, since we don't compute attr_widths estimates for appendrels that are entirely excluded by constraints. So remove the Assert; we'll just fall back on get_typavgwidth in such cases.

  Also, since set_subquery_size_estimates calls set_baserel_size_estimates which calls set_rel_width, there's no need for set_subquery_size_estimates to call get_typavgwidth; set_rel_width will handle it for us if we just leave the estimate set to zero. Remove the unnecessary code.

  Per report from Erik Rijkers and subsequent investigation.
* Use consistent format for reporting GetLastError(). (Peter Eisentraut, 2011-08-23)
  Use something like "error code %lu" for reporting GetLastError() values on Windows. Previously, a mix of different wordings and formats were in use.
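  For illustration only, the convention looks roughly like this; the message text is made up, and only the "error code %lu" tail is the point.

      ereport(ERROR,
              (errcode_for_file_access(),
               errmsg("could not map shared memory segment: error code %lu",
                      GetLastError())));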
* Typo fix. (Robert Haas, 2011-08-22)
* Fix handling of extension membership when filling in a shell operator. (Tom Lane, 2011-08-22)
  The previous coding would result in deleting and not re-creating the extension membership pg_depend rows, since there was no CommandCounterIncrement that would allow recordDependencyOnCurrentExtension to see that the deletion had happened. Make it work like the shell type case, i.e., keep the existing entries (and then throw an error if they're for the wrong extension).

  Per bug #6172 from Hitoshi Harada. Investigation and fix by Dimitri Fontaine.
* Fix trigger WHEN conditions when both BEFORE and AFTER triggers exist. (Tom Lane, 2011-08-21)
  Due to tuple-slot mismanagement, evaluation of WHEN conditions for AFTER ROW UPDATE triggers could crash if there had been a BEFORE ROW trigger fired for the same update. Fix by not trying to overload the use of estate->es_trig_tuple_slot.

  Per report from Yoran Heling. Back-patch to 9.0, when trigger WHEN conditions were introduced.
* Fix performance problem when building a lossy tidbitmap. (Tom Lane, 2011-08-20)
  As pointed out by Sergey Koposov, repeated invocations of tbm_lossify can make building a large tidbitmap into an O(N^2) operation. To fix, make sure we remove more than the minimum amount of information per call, and add a fallback path to behave sanely if we're unable to fit the bitmap within the requested amount of memory.

  This has been wrong since the tidbitmap code was written, so back-patch to all supported branches.
* Make lazy_vacuum_rel call pg_rusage_init only if needed. (Robert Haas, 2011-08-18)
  do_analyze_rel already does it this way.

  Euler Taveira de Oliveira
* Remove obsolete README file. (Robert Haas, 2011-08-18)
  Perhaps we ought to add some other kind of documentation here instead, but for now let's get rid of this woefully obsolete description of the sinval machinery.
* Translation updates (Peter Eisentraut, 2011-08-17)
* Fix comment about which version had BACKUP METHOD line in backup_label, again. (Heikki Linnakangas, 2011-08-17)
  It was invalidated again by Fujii's patch to 9.1.
* Revise sinval code to remove no-longer-used tuple TID from inval messages. (Tom Lane, 2011-08-16)
  This requires adjusting the API for syscache callback functions: they now get a hash value, not a TID, to identify the target tuple. Most of them weren't paying any attention to that argument anyway, but plancache did require a small amount of fixing.

  Also, improve performance a trifle by avoiding sending duplicate inval messages when a heap_update isn't changing the catcache lookup columns.
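  Under the revised API a callback is keyed by catcache hash value. A minimal sketch, with illustrative names for the callback and its cached state; treating hashvalue == 0 as "flush everything" is the usual convention but is an assumption here, not quoted from the commit.

      static uint32 cached_hashvalue = 0;     /* illustrative cached state */
      static bool   cached_valid = false;

      static void
      example_syscache_callback(Datum arg, int cacheid, uint32 hashvalue)
      {
          /* The callback now receives a hash value rather than a TID. */
          if (hashvalue == 0 || hashvalue == cached_hashvalue)
              cached_valid = false;
      }

      /* Registered once per backend, e.g. against pg_proc's OID cache: */
      CacheRegisterSyscacheCallback(PROCOID, example_syscache_callback, (Datum) 0);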
* Forget about targeting catalog cache invalidations by tuple TID. (Tom Lane, 2011-08-16)
  The TID isn't stable enough: we might queue an sinval event before a VACUUM FULL, and then process it afterwards, when the target tuple no longer has the same TID. So we must invalidate entries on the basis of hash value only. The old coding can be shown to result in various bizarre, hard-to-reproduce errors in the presence of concurrent VACUUM FULLs on system catalogs, and could easily result in permanent catalog corruption, up to and including complete loss of tables.

  This commit is just a minimal fix that removes the unsafe comparison. We should remove transmission of the tuple TID from sinval messages altogether, and then arrange to suppress the extra message in the common case of a heap_update that doesn't change the key hashvalue. But that's going to be much more invasive, and will only produce a probably-marginal performance gain, so it doesn't seem like material for a back-patch.

  Back-patch to 9.0. Before that, VACUUM FULL refused to do any tuple moving if it found any INSERT_IN_PROGRESS or DELETE_IN_PROGRESS tuples (and CLUSTER would give up altogether), so there was no risk of moving a tuple that might be the subject of an unsent sinval message.
* Fix incorrect order of operations during sinval reset processing. (Tom Lane, 2011-08-16)
  We have to be sure that we have revalidated each nailed-in-cache relcache entry before we try to use it to load data for some other relcache entry. The introduction of "mapped relations" in 9.0 broke this, because although we updated the state kept in relmapper.c early enough, we failed to propagate that information into relcache entries soon enough; in particular, we could try to fetch pg_class rows out of pg_class before we'd updated its relcache entry's rd_node.relNode value from the map.

  This bug accounts for Dave Gould's report of failures after "vacuum full pg_class", and I believe that there is risk for other system catalogs as well.

  The core part of the fix is to copy relmapper data into the relcache entries during "phase 1" in RelationCacheInvalidate(), before they'll be used in "phase 2". To try to future-proof the code against other similar bugs, I also rearranged the order in which nailed relations are visited during phase 2: now it's pg_class first, then pg_class_oid_index, then other nailed relations. This should ensure that RelationClearRelation can apply RelationReloadIndexInfo to all nailed indexes without risking use of not-yet-revalidated relcache entries.

  Back-patch to 9.0 where the relation mapper was introduced.
* Preserve toast value OIDs in toast-swap-by-content for CLUSTER/VACUUM FULL. (Tom Lane, 2011-08-16)
  This works around the problem that a catalog cache entry might contain a toast pointer that we try to dereference just as a VACUUM FULL completes on that catalog. We will see the sinval message on the cache entry when we acquire lock on the toast table, but by that point we've already told tuptoaster.c "here's the pointer to fetch", so it's difficult from a code structural standpoint to update the pointer before we use it. Much less painful to ensure that toast pointers are not invalidated in the first place.

  We have to add a bit of code to deal with the case that a value that previously wasn't toasted becomes so; but that should be a seldom-exercised corner case, so the inefficiency shouldn't be significant.

  Back-patch to 9.0. In prior versions, we didn't allow CLUSTER on system catalogs, and VACUUM FULL didn't result in reassignment of toast OIDs, so there was no problem.
* Fix race condition in relcache init file invalidation. (Tom Lane, 2011-08-16)
  The previous code tried to synchronize by unlinking the init file twice, but that doesn't actually work: it leaves a window wherein a third process could read the already-stale init file but miss the SI messages that would tell it the data is stale. The result would be bizarre failures in catalog accesses, typically "could not read block 0 in file ..." later during startup.

  Instead, hold RelCacheInitLock across both the unlink and the sending of the SI messages. This is more straightforward, and might even be a bit faster since only one unlink call is needed.

  This has been wrong since it was put in (in 2002!), so back-patch to all supported releases.
* Fix bogus comment that claimed that the new BACKUP METHOD line in backup_label was new in 9.0. (Heikki Linnakangas, 2011-08-16)
  Spotted by Fujii Masao.
* Add "Reason code" prefix to internal SSI error messagesPeter Eisentraut2011-08-15
| | | | | | | | | | This makes it clearer that the error message is perhaps not supposed to be understood by users, and it also makes it somewhat clearer that it was not accidentally omitted from translation. Idea from Heikki Linnakangas, except that we don't mark "Reason code" for translation at this point, because that would make the implementation too cumbersome.
* Fix unsafe order of operations in foreign-table DDL commands. (Tom Lane, 2011-08-14)
  When updating or deleting a system catalog tuple, it's necessary to acquire RowExclusiveLock on the catalog before looking up the tuple; otherwise a concurrent VACUUM FULL on the catalog might move the tuple to a different TID before we can apply the update. Coding patterns that find the tuple via a table scan aren't at risk here, but when obtaining the tuple from a catalog cache, correct ordering is important; and several routines in foreigncmds.c got it wrong.

  Noted while running the regression tests in parallel with VACUUM FULL of assorted system catalogs.

  For consistency I moved all the heap_open calls to the starts of their functions, including a couple for which there was no actual bug. Back-patch to 8.4 where foreigncmds.c was added.
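  A sketch of the safe ordering for one such path (the fdwid variable and the elided update steps are illustrative, not the actual foreigncmds.c code): lock the catalog first, then consult the syscache.

      Relation  rel;
      HeapTuple tup;

      /* Lock the catalog before the cache lookup, so a concurrent
       * VACUUM FULL cannot relocate the tuple in between. */
      rel = heap_open(ForeignDataWrapperRelationId, RowExclusiveLock);

      tup = SearchSysCacheCopy1(FOREIGNDATAWRAPPEROID, ObjectIdGetDatum(fdwid));
      if (!HeapTupleIsValid(tup))
          elog(ERROR, "cache lookup failed for foreign-data wrapper %u", fdwid);

      /* ... modify tup, simple_heap_update(), CatalogUpdateIndexes() ... */

      heap_close(rel, RowExclusiveLock);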
* Fix incorrect timeout handling during initial authentication transaction. (Tom Lane, 2011-08-13)
  The statement start timestamp was not set before initiating the transaction that is used to look up client authentication information in pg_authid. In consequence, enable_sig_alarm computed a wrong value (far in the past) for statement_fin_time. That didn't have any immediate effect, because the timeout alarm was set without reference to statement_fin_time; but if we subsequently blocked on a lock for a short time, CheckStatementTimeout would consult the bogus value when we cancelled the lock timeout wait, and then conclude we'd timed out, leading to immediate failure of the connection attempt. Thus an innocent "vacuum full pg_authid" would cause failures of concurrent connection attempts. Noted while testing other, more serious consequences of vacuum full on system catalogs.

  We should set the statement timestamp before StartTransactionCommand(), so that the transaction start timestamp is also valid. I'm not sure if there are any non-cosmetic effects of it not being valid, but the xact timestamp is at least sent to the statistics machinery.

  Back-patch to 9.0. Before that, the client authentication timeout was done outside any transaction and did not depend on this state to be valid.
* Teach unix_latch.c to use poll() where available. (Tom Lane, 2011-08-11)
  poll() is preferred over select() on platforms where both are available, because it tends to be a bit faster and it doesn't have an arbitrary limit on the range of FD numbers that can be accessed. The FD range limit does not appear to be a risk factor for any 9.1 usages, so this doesn't need to be back-patched, but we need to have it in place if we keep on expanding the uses of WaitLatch.
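  For illustration, a minimal poll()-based wait: the descriptor is passed directly in a pollfd, so there is no FD_SETSIZE ceiling as with select(). The self-pipe descriptor and timeout variable names are placeholders, not the actual unix_latch.c code.

      #include <errno.h>
      #include <poll.h>

      struct pollfd pfd;
      int           rc;

      pfd.fd = selfpipe_readfd;       /* e.g. the latch's self-pipe read end */
      pfd.events = POLLIN | POLLERR;
      pfd.revents = 0;

      rc = poll(&pfd, 1, timeout_ms); /* timeout in ms; -1 means wait forever */
      if (rc < 0 && errno != EINTR)
          elog(ERROR, "poll() failed: %m");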
* Unbreak legacy syntax "COMMENT ON RULE x IS y", with no relation name. (Robert Haas, 2011-08-11)
  check_object_ownership() isn't happy about the null relation pointer. We could fix it there, but this seems more future-proof.
* Remove wal_sender_delay GUC, because it's no longer useful. (Tom Lane, 2011-08-10)
  The latch infrastructure is now capable of detecting all cases where the walsender loop needs to wake up, so there is no reason to have an arbitrary timeout.

  Also, modify the walsender loop logic to follow the standard pattern of ResetLatch, test for work to do, WaitLatch. The previous coding was both hard to follow and buggy: it would sometimes busy-loop despite having nothing available to do, e.g. between receipt of a signal and the next time it was caught up with new WAL, and it also had interesting choices like deciding to update to WALSNDSTATE_STREAMING on the strength of information known to be obsolete.
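  The standard latch idiom referred to above, as a rough sketch. The work-checking helpers are placeholders, and the two-argument WaitLatch(latch, timeout) call assumes the latch API of this era; resetting before checking guarantees that a latch set after the check simply makes the next wait return immediately.

      for (;;)
      {
          ResetLatch(&MyWalSnd->latch);

          if (work_available())       /* placeholder for the real checks */
          {
              do_pending_work();      /* placeholder */
              continue;               /* re-check before sleeping */
          }

          (void) WaitLatch(&MyWalSnd->latch, sleep_ms);
      }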
* Add a bit of debug logging to backend_read_statsfile(). (Tom Lane, 2011-08-10)
  This is in hopes of learning more about what causes "pgstat wait timeout" warnings in the buildfarm. This patch should probably be reverted once we've learned what we can. As coded, it will result in regression test "failures" at half the delay that the existing code does, so I expect to see a few more than before.
* Change the autovacuum launcher to use WaitLatch instead of a poll loop. (Tom Lane, 2011-08-10)
  In pursuit of this (and with the expectation that WaitLatch will be needed in more places), convert the latch field that was already added to PGPROC for sync rep into a generic latch that is activated for all PGPROC-owning processes, and change many of the standard backend signal handlers to set that latch when a signal happens. This will allow WaitLatch callers to be wakened properly by these signals.

  In passing, fix a whole bunch of signal handlers that had been hacked to do things that might change errno, without adding the necessary save/restore logic for errno. Also make some minor fixes in unix_latch.c, and clean up bizarre and unsafe scheme for disowning the process's latch.

  Much of this has to be back-patched into 9.1.

  Peter Geoghegan, with additional work by Tom
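  The errno-preserving handler pattern mentioned above, as a small sketch; the handler and flag names are illustrative.

      static volatile sig_atomic_t got_SIGHUP = false;

      static void
      example_sighup_handler(SIGNAL_ARGS)
      {
          int  save_errno = errno;

          got_SIGHUP = true;
          if (MyProc)
              SetLatch(&MyProc->procLatch);   /* wake any WaitLatch caller */

          errno = save_errno;                 /* don't clobber errno seen by
                                               * the interrupted code */
      }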
* If backup-end record is not seen, and we reach end of recovery from a streamed backup, throw an error and refuse to start up. (Heikki Linnakangas, 2011-08-10)
  The restore has not finished correctly in that case and the data directory is possibly corrupt. We already errored out in case of archive recovery, but could not during crash recovery because we couldn't distinguish between the case that pg_start_backup() was called and the database then crashed (must not error, data is OK), and the case that we're restoring from a backup and not all the needed WAL was replayed (data can be corrupt).

  To distinguish those cases, add a line to backup_label to indicate whether the backup was taken with pg_start/stop_backup(), or by streaming (i.e. pg_basebackup). This requires re-initdb, because of a new field added to the control file.
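  For illustration, a backup_label along these lines records how the backup was taken; the field values are made up, and only the BACKUP METHOD line (streamed vs. pg_start_backup) is the point here.

      START WAL LOCATION: 0/2000020 (file 000000010000000000000002)
      CHECKPOINT LOCATION: 0/2000058
      BACKUP METHOD: streamed
      START TIME: 2011-08-10 12:00:00 UTC
      LABEL: pg_basebackup base backup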
* Measure WaitLatch's timeout parameter in milliseconds, not microseconds. (Tom Lane, 2011-08-09)
  The original definition had the problem that timeouts exceeding about 2100 seconds couldn't be specified on 32-bit machines. Milliseconds seem like sufficient resolution, and finer grain than that would be fantasy anyway on many platforms.

  Back-patch to 9.1 so that this aspect of the latch API won't change between 9.1 and later releases.

  Peter Geoghegan
* Documentation improvement and minor code cleanups for the latch facility. (Tom Lane, 2011-08-09)
  Improve the documentation around weak-memory-ordering risks, and do a pass of general editorialization on the comments in the latch code. Make the Windows latch code more like the Unix latch code where feasible; in particular provide the same Assert checks in both implementations. Fix poorly-placed WaitLatch call in syncrep.c.

  This patch resolves, for the moment, concerns around weak-memory-ordering bugs in latch-related code: we have documented the restrictions and checked that existing calls meet them. In 9.2 I hope that we will install suitable memory barrier instructions in SetLatch/ResetLatch, so that their callers don't need to be quite so careful.
* Avoid creating PlaceHolderVars immediately within PlaceHolderVars. (Tom Lane, 2011-08-09)
  Such a construction is useless since the lower PlaceHolderVar is already nullable; no need to make it more so. Noted while pursuing bug #6154.

  This is just a minor planner efficiency improvement, since the final plan will come out the same anyway after PHVs are flattened. So not worth the risk of back-patching.
* Use clearer notation for getnameinfo() return handling. (Peter Eisentraut, 2011-08-09)
  Writing

      if (getnameinfo(...))
          handle_error();

  reads quite strangely, so use something like

      if (getnameinfo(...) != 0)
          handle_error();

  instead.
* Change the way string relopts are allocated. (Heikki Linnakangas, 2011-08-09)
  Don't try to allocate the default value for a string relopt in the same palloc chunk as the relopt_string struct. That didn't work too well if you added a built-in string relopt in the stringRelOpts array, as it's not possible to have an initializer for a variable length struct in C. This makes the code slightly simpler too.

  While we're at it, move the call to the validator function in add_string_reloption to before the allocation, so that if someone does pass a bogus default value, we don't leak memory.
* Fix grammar and spelling in log message. (Heikki Linnakangas, 2011-08-09)
* Fix nested PlaceHolderVar expressions that appear only in targetlists. (Tom Lane, 2011-08-09)
  A PlaceHolderVar's expression might contain another, lower-level PlaceHolderVar. If the outer PlaceHolderVar is used, the inner one certainly will be also, and so we have to make sure that both of them get into the placeholder_list with correct ph_may_need values during the initial pre-scan of the query (before deconstruct_jointree starts). We did this correctly for PlaceHolderVars appearing in the query quals, but overlooked the issue for those appearing in the top-level targetlist; with the result that nested placeholders referenced only in the targetlist did not work correctly, as illustrated in bug #6154.

  While at it, add some error checking to find_placeholder_info to ensure that we don't try to create new placeholders after it's too late to do so; they have to all be created before deconstruct_jointree starts.

  Back-patch to 8.4 where the PlaceHolderVar mechanism was introduced.
* Clean up ill-advised attempt to invent a private set of Node tags. (Tom Lane, 2011-08-06)
  Somebody thought it'd be cute to invent a set of Node tag numbers that were defined independently of, and indeed conflicting with, the main tag-number list. While this accidentally failed to fail so far, it would certainly lead to trouble as soon as anyone wanted to, say, apply copyObject to these node types. Clang was already complaining about the use of makeNode on these tags, and I think quite rightly so. Fix by pushing these node definitions into the mainstream, including putting replnodes.h where it belongs.
* Reduce PG_SYSLOG_LIMIT to 900 bytes. (Tom Lane, 2011-08-05)
  The previous limit of 1024 was set on the assumption that all modern syslog implementations have line length limits of 2KB or so. However, this is false, as at least Solaris and sysklogd truncate at only 1KB. 900 seems to leave enough room for the max likely length of the tacked-on prefixes, so let's go with that.

  As with the previous change, it doesn't seem wise to back-patch this into already-released branches; but it should be OK to sneak it into 9.1.

  Noah Misch
* Allow per-column foreign data wrapper options. (Robert Haas, 2011-08-05)
  Shigeru Hanada, with fairly minor editing by me.
* Create VXID locks "lazily" in the main lock table. (Robert Haas, 2011-08-04)
  Instead of entering them on transaction startup, we materialize them only when someone wants to wait, which will occur only during CREATE INDEX CONCURRENTLY. In Hot Standby mode, the startup process must also be able to probe for conflicting VXID locks, but the lock need never be fully materialized, because the startup process does not use the normal lock wait mechanism. Since most VXID locks never need to touch the lock manager partition locks, this can significantly reduce blocking contention on read-heavy workloads.

  Patch by me. Review by Jeff Davis.
* Make pgbench use erand48() rather than random(). (Robert Haas, 2011-08-03)
  glibc renders random() thread-safe by wrapping a futex lock around it; testing reveals that this limits the performance of pgbench on machines with many CPU cores. Rather than switching to random_r(), which is only available on GNU systems and crashes unless you use undocumented alchemy to initialize the random state properly, switch to our built-in implementation of erand48(), which is both thread-safe and concurrent.

  Since the list of reasons not to use the operating system's erand48() is getting rather long, rename ours to pg_erand48() (and similarly for our implementations of lrand48() and srand48()) and just always use those. We were already doing this on Cygwin anyway, and the glibc implementation is not quite thread-safe, so pgbench wouldn't be able to use that either.

  Per discussion with Tom Lane.
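  A small sketch of the renamed interface, assuming the usual erand48-style signature with caller-supplied state (the seed values here are arbitrary): because each thread owns its xseed array, there is no shared lock to contend on.

      unsigned short xseed[3] = {0x330e, 0xabcd, 0x1234};   /* per-thread state */
      double         r;

      r = pg_erand48(xseed);      /* uniform double in [0, 1); advances xseed */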
* Move CheckRecoveryConflictDeadlock() call to a safer place. (Tom Lane, 2011-08-02)
  This kluge was inserted in a spot apparently chosen at random: the lock manager's state is not yet fully set up for the wait, and in particular LockWaitCancel hasn't been armed by setting lockAwaited, so the ProcLock will not get cleaned up if the ereport is thrown. This seems to not cause any observable problem in trivial test cases, because LockReleaseAll will silently clean up the debris; but I was able to cause failures with tests involving subtransactions.

  Fixes breakage induced by commit c85c941470efc44494fd7a5f426ee85fc65c268c.

  Back-patch to all affected branches.