path: root/src
...
* libpq compiles various pgport files like ecpg does, and needs similar Makefile changes for the win32 setlocale() wrapper I put into ecpg, to make it compile on MinGW. (Heikki Linnakangas, 2011-09-01)
* In ecpglib restore LC_NUMERIC in case of an error. (Michael Meskes, 2011-09-01)
* Fix MinGW build, broken by my previous patch to add a setlocale() wrapper on Windows. (Heikki Linnakangas, 2011-09-01)

  ecpglib doesn't link with libpgport, but picks and compiles the .c files it needs individually. To cope with that, move the setlocale() wrapper from chklocale.c to a separate setlocale.c file, and include that in ecpglib.
* setlocale() on Windows doesn't work correctly if the locale name contains dots. (Heikki Linnakangas, 2011-09-01)

  I previously worked around this in initdb, mapping the known problematic locale names to aliases that work, but Hiroshi Inoue pointed out that that's not enough because even if you use one of the aliases, like "Chinese_HKG", setlocale(LC_CTYPE, NULL) returns back the long form, ie. "Chinese_Hong Kong S.A.R.". When we try to restore an old locale value by passing that value back to setlocale(), it fails. Note that you are affected by this bug also if you use one of those short-form names manually, so just reverting the hack in initdb won't fix it.

  To work around that, move the locale name mapping from initdb to a wrapper around setlocale(), so that the mapping is invoked on every setlocale() call. Also, add a few checks for failed setlocale() calls in the backend. These calls shouldn't fail, and if they do there isn't much we can do about it, but at least you'll get a warning.

  Backpatch to 9.1, where the initdb hack was introduced. The Windows bug affects older versions too if you set locale manually to one of the aliases, but given the lack of complaints from the field, I'm hesitant to backpatch.
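  A minimal C sketch of the wrapper approach described above; the mapping entries and the function name here are illustrative, not the committed table:

      #include <locale.h>
      #include <string.h>

      struct locale_map
      {
          const char *long_name;  /* what setlocale(..., NULL) reports */
          const char *alias;      /* short form that setlocale() accepts */
      };

      static const struct locale_map locale_map[] = {
          {"Chinese_Hong Kong S.A.R.", "Chinese_HKG"},
      };

      char *
      pgwin32_setlocale(int category, const char *locale)
      {
          /* Map known-problematic long-form names on every call, so
           * values read back with setlocale(..., NULL) stay usable. */
          if (locale != NULL)
          {
              for (size_t i = 0;
                   i < sizeof(locale_map) / sizeof(locale_map[0]); i++)
              {
                  if (strcmp(locale, locale_map[i].long_name) == 0)
                  {
                      locale = locale_map[i].alias;
                      break;
                  }
              }
          }
          return setlocale(category, locale);
      }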
* Move the line to undefine setlocale() macro on Win32 outside USE_REPL_SNPRINTF ifdef block. (Heikki Linnakangas, 2011-09-01)

  It has nothing to do with whether the replacement snprintf function is used. It caused no live bug, because the replacement snprintf function is always used on Win32, but it was nevertheless misplaced.
* Further repair of eqjoinsel ndistinct-clamping logic. (Tom Lane, 2011-09-01)

  Examination of examples provided by Mark Kirkwood and others has convinced me that actually commit 7f3eba30c9d622d1981b1368f2d79ba0999cdff2 was quite a few bricks shy of a load. The useful part of that patch was clamping ndistinct for the inner side of a semi or anti join, and the reason why that's needed is that it's the only way that restriction clauses eliminating rows from the inner relation can affect the estimated size of the join result. I had not clearly understood why the clamping was appropriate, and so mis-extrapolated to conclude that we should clamp ndistinct for the outer side too, as well as for both sides of regular joins. These latter actions were all wrong, and are reverted with this patch.

  In addition, the clamping logic is now made to affect the behavior of both paths in eqjoinsel_semi, with or without MCV lists to compare. When we have MCVs, we suppose that the most common values are the ones that are most likely to survive the decimation resulting from a lower restriction clause, so we think of the clamping as eliminating non-MCV values, or potentially even the least-common MCVs for the inner relation.

  Back-patch to 8.4, same as previous fixes in this area.
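  The clamp that survives this commit, reduced to a sketch (variable names only loosely follow selfuncs.c; this is not the committed code):

      /* Inner side of a semi/anti join only: restriction clauses on the
       * inner rel can remove distinct values, and capping ndistinct at
       * the restricted row count is how that reaches the join estimate.
       * The outer side, and both sides of regular joins, stay unclamped. */
      static double
      clamp_inner_ndistinct(double nd2, double inner_rows)
      {
          return (nd2 > inner_rows) ? inner_rows : nd2;
      }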
* Fix pg_upgrade to preserve toast relfrozenxids for old 8.3 servers. (Bruce Momjian, 2011-08-31)

  This fixes a pg_upgrade bug that could lead to query errors when clog files are improperly removed. Backpatch to 8.4, 9.0, 9.1.
* Improve eqjoinsel's ndistinct clamping to work for multiple levels of join. (Tom Lane, 2011-08-31)

  This patch fixes an oversight in my commit 7f3eba30c9d622d1981b1368f2d79ba0999cdff2 of 2008-10-23. That patch accounted for baserel restriction clauses that reduced the number of rows coming out of a table (and hence the number of possibly-distinct values of a join variable), but not for join restriction clauses that might have been applied at a lower level of join. To account for the latter, look up the sizes of the min_lefthand and min_righthand inputs of the current join, and clamp with those in the same way as for the base relations.

  Noted while investigating a complaint from Ben Chobot, although this in itself doesn't seem to explain his report.

  Back-patch to 8.4; previous versions used different estimation methods for which this heuristic isn't relevant.
* Fix a missed case in code for "moving average" estimate of reltuples. (Tom Lane, 2011-08-30)

  It is possible for VACUUM to scan no pages at all, if the visibility map shows that all pages are all-visible. In this situation VACUUM has no new information to report about the relation's tuple density, so it wasn't changing pg_class.reltuples ... but it updated pg_class.relpages anyway. That's wrong in general, since there is no evidence to justify changing the density ratio reltuples/relpages, but it's particularly bad if the previous state was relpages=reltuples=0, which means "unknown tuple density". We just replaced "unknown" with "zero".

  ANALYZE would eventually recover from this, but it could take a lot of repetitions of ANALYZE to do so if the relation size is much larger than the maximum number of pages ANALYZE will scan, because of the moving-average behavior introduced by commit b4b6923e03f4d29636a94f6f4cc2f5cf6298b8c8.

  The only known situation where we could have relpages=reltuples=0 and yet the visibility map asserts everything's visible is immediately following a pg_upgrade. It might be advisable for pg_upgrade to try to preserve the relpages/reltuples statistics; but in any case this code is wrong on its own terms, so fix it.

  Per report from Sergey Koposov. Back-patch to 8.4, where the visibility map was introduced, same as the previous change.
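  The update rule, sketched from the description above (names follow the backend loosely; the scanned_pages == 0 early return is the case this commit adds):

      #include <math.h>

      static double
      estimate_reltuples(double old_rel_pages, double old_rel_tuples,
                         double total_pages, double scanned_pages,
                         double scanned_tuples)
      {
          double      old_density;
          double      new_density;
          double      multiplier;

          /* No pages scanned: no new density evidence, so keep the old
           * estimate (previously relpages was overwritten anyway). */
          if (scanned_pages == 0)
              return old_rel_tuples;

          /* Unknown previous density: extrapolate the scanned density. */
          if (old_rel_pages == 0)
              return floor(scanned_tuples / scanned_pages * total_pages + 0.5);

          /* Moving average, weighted by the fraction actually scanned. */
          old_density = old_rel_tuples / old_rel_pages;
          new_density = scanned_tuples / scanned_pages;
          multiplier = scanned_pages / total_pages;
          return floor((old_density + (new_density - old_density) * multiplier)
                       * total_pages + 0.5);
      }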
* Fix concat_ws() to not insert a separator after leading NULL argument(s). (Tom Lane, 2011-08-29)

  Per bug #6181 from Itagaki Takahiro. Also do some marginal code cleanup and improve error handling.
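  The shape of the fix, as a stand-alone sketch (names illustrative): key the separator off "have we output anything yet", not the argument index.

      #include <stdbool.h>
      #include <string.h>

      /* Join non-NULL strings with sep; leading NULLs contribute neither
       * text nor a separator.  Caller's buffer must be large enough. */
      static void
      concat_ws_demo(char *out, const char *sep,
                     const char *const *args, int nargs)
      {
          bool        first = true;

          out[0] = '\0';
          for (int i = 0; i < nargs; i++)
          {
              if (args[i] == NULL)
                  continue;           /* skip NULLs entirely */
              if (!first)
                  strcat(out, sep);
              strcat(out, args[i]);
              first = false;
          }
      }

  With args = {NULL, "b", NULL, "c"} and sep = "," this yields "b,c", not ",b,c".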
* Actually, all of parallel restore's limitations should be tested earlier. (Tom Lane, 2011-08-28)

  On closer inspection, whining in restore_toc_entries_parallel is really much too late for any user-facing error case. The right place to do it is at the start of RestoreArchive(), before we've done anything interesting (such as trying to DROP all the targets ...).

  Back-patch to 8.4, where parallel restore was introduced.
* Be more user-friendly about unsupported cases for parallel pg_restore. (Tom Lane, 2011-08-28)

  If we are unable to do a parallel restore because the input file is stdin or is otherwise unseekable, we should complain and fail immediately, not after having done some of the restore. Complaining once per thread isn't so cool either, and the messages should be worded to make it clear this is an unsupported case not some weird race-condition bug. Per complaint from Lonni Friedman.

  Back-patch to 8.4, where parallel restore was introduced.
* Don't assume that "E" response to NEGOTIATE_SSL_CODE means pre-7.0 server. (Tom Lane, 2011-08-27)

  These days, such a response is far more likely to signify a server-side problem, such as fork failure. Reporting "server does not support SSL" (in sslmode=require) could be quite misleading. But the results could be even worse in sslmode=prefer: if the problem was transient and the next connection attempt succeeds, we'll have silently fallen back to protocol version 2.0, possibly disabling features the user needs.

  Hence, it seems best to just eliminate the assumption that backing off to non-SSL/2.0 protocol is the way to recover from an "E" response, and instead treat the server error the same as we would in non-SSL cases.

  I tested this change against a pre-7.0 server, and found that there was a second logic bug in the "prefer" path: the test to decide whether to make a fallback connection attempt assumed that we must have opened conn->ssl, which in fact does not happen given an "E" response. After fixing that, the code does indeed connect successfully to pre-7.0, as long as you didn't set sslmode=require. (If you did, you get "Unsupported frontend protocol", which isn't completely off base given the server certainly doesn't support SSL.)

  Since there seems no reason to believe that pre-7.0 servers exist anymore in the wild, back-patch to all supported branches.
* Ensure we discard unread/unsent data when abandoning a connection attempt. (Tom Lane, 2011-08-27)

  There are assorted situations wherein PQconnectPoll() will abandon a connection attempt and try again with different parameters (eg, SSL versus not SSL). However, the code forgot to discard any pending data in libpq's I/O buffers when doing this. In at least one case (server returns E message during SSL negotiation), there is unread input data which bollixes the next connection attempt. I have not checked to see whether this is possible in the other cases where we close the socket and retry, but it seems like a matter of good defensive programming to add explicit buffer-flushing code to all of them.

  This is one of several issues exposed by Daniel Farina's report of misbehavior after a server-side fork failure.

  This has been wrong since forever, so back-patch to all supported branches.
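  The reset amounts to a few assignments wherever the socket is closed for a retry; a sketch against libpq's internal struct (field names per libpq-int.h, helper name hypothetical):

      #include "libpq-int.h"

      static void
      drop_connection_buffers(PGconn *conn)
      {
          /* Forget unread input: a stray "E" from the failed attempt must
           * not be parsed as part of the next one. */
          conn->inStart = conn->inCursor = conn->inEnd = 0;
          /* And forget unsent output. */
          conn->outCount = 0;
      }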
* Fix potential memory clobber in tsvector_concat(). (Tom Lane, 2011-08-26)

  tsvector_concat() allocated its result workspace using the "conservative" estimate of the sum of the two input tsvectors' sizes. Unfortunately that wasn't so conservative as all that, because it supposed that the number of pad bytes required could not grow. Which it can, as per test case from Jesper Krogh, if there's a mix of lexemes with positions and lexemes without them in the input data. The fix is to assume that we might add a not-previously-present pad byte for each and every lexeme in the two inputs; which really is conservative, but it doesn't seem worthwhile to try to be more precise.

  This is an aboriginal bug in tsvector_concat, so back-patch to all versions containing it.
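  Numerically, the estimate changes like this (a stand-alone sketch; the real code computes sizes from WordEntry arrays):

      #include <stddef.h>

      /* Worst-case output size for concatenating two tsvectors: the sum
       * of the input sizes plus one possible new pad byte per lexeme in
       * either input (a lexeme may need padding in the result that it
       * didn't need in its source vector). */
      static size_t
      concat_workspace(size_t bytes1, size_t bytes2,
                       int nlexemes1, int nlexemes2)
      {
          return bytes1 + bytes2 + (size_t) nlexemes1 + (size_t) nlexemes2;
      }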
* Add expected isolationtester output when prepared xacts are disabled (Alvaro Herrera, 2011-08-25)

  This was deemed unnecessary initially but in later discussion it was agreed otherwise. Original file from Kevin Grittner, allegedly from Dan Ports. I had to clean up whitespace a bit per changes from Heikki.
* Fix psql lexer to avoid use of backtracking. (Tom Lane, 2011-08-25)

  Per previous experimentation, backtracking slows down lexing performance significantly (by about a third). It's usually pretty easy to avoid, just need to have rules that accept an incomplete construct and do whatever the lexer would have done otherwise.

  The backtracking was introduced by the patch that added quoted variable substitution. Back-patch to 9.0 where that was added.
* Properly quote SQL/MED generic options in pg_dump output. (Robert Haas, 2011-08-25)

  Shigeru Hanada
* Revert "Tweak postgresql.conf.sample's comments on listen_addresess."Robert Haas2011-08-25
| | | | | This reverts commit 1bde67c0b9adce8b7ed2a2d1fcb2788cf96cea64, which should have been done only on the master branch.
* Tweak postgresql.conf.sample's comments on listen_addresses. (Robert Haas, 2011-08-25)

  This makes it slightly clearer that '*' is not part of the default value, in case that wasn't obvious. As requested by Dougal Sutherland.
* Fix pgxs.mk to always add --dbname=$(CONTRIB_TESTDB) to REGRESS_OPTS. (Tom Lane, 2011-08-24)

  The previous coding resulted in contrib modules unintentionally overriding the use of CONTRIB_TESTDB. There seems no particularly good reason to allow that (after all, the makefile can set CONTRIB_TESTDB if that's really what it intends). In passing, document REGRESS_OPTS where the other pgxs.mk options are documented.

  Back-patch to 9.1 --- in prior versions, there were no cases of contrib modules setting REGRESS_OPTS without including the --dbname switch, so while the coding was fragile there was no actual bug.
* Fix multiple bugs in extension dropping. (Tom Lane, 2011-08-24)

  When we implemented extensions, we made findDependentObjects() treat EXTENSION dependency links similarly to INTERNAL links. However, that logic contained an implicit assumption that an object could have at most one INTERNAL dependency, so it did not work correctly for objects having both INTERNAL and DEPENDENCY links. This led to failure to drop some extension member objects when dropping the extension.

  Furthermore, we'd never actually exercised the case of recursing to an internally-referenced (owning) object from anything other than a NORMAL dependency, and it turns out that passing the incoming dependency's flags to the owning object is the Wrong Thing. This led to sometimes dropping a whole extension silently when we should have rejected the drop command for lack of CASCADE.

  Since we obviously were under-testing extension drop scenarios, add some regression test cases. Unfortunately, such test cases require some extensions (duh), so we can't test for problems in the core regression tests. I chose to add them to the earthdistance contrib module, which is a good test case because it has a dependency on the cube contrib module.

  Back-patch to 9.1. Arguably these are pre-existing bugs in INTERNAL dependency handling, but since it appears that the cases can never arise pre-9.1, I'll refrain from back-patching the logic changes further than that.
* Make CREATE EXTENSION check schema creation permissions. (Tom Lane, 2011-08-23)

  When creating a new schema for a non-relocatable extension, we neglected to check whether the calling user has permission to create schemas. That didn't matter in the original coding, since we had already checked superuserness, but in the new dispensation where users need not be superusers, we should check it. Use CreateSchemaCommand() rather than calling NamespaceCreate() directly, so that we also enforce the rules about reserved schema names.

  Per complaint from KaiGai Kohei, though this isn't the same as his patch.
* Fix overoptimistic assumptions in column width estimation for subqueries. (Tom Lane, 2011-08-23)

  set_append_rel_pathlist supposed that, while computing per-column width estimates for the appendrel, it could ignore child rels for which the translated reltargetlist entry wasn't a Var. This gave rise to completely silly estimates in some common cases, such as constant outputs from some or all of the arms of a UNION ALL. Instead, fall back on get_typavgwidth to estimate from the value's datatype; which might be a poor estimate but at least it's not completely wacko.

  That problem was exposed by an Assert in set_subquery_size_estimates, which unfortunately was still overoptimistic even with that fix, since we don't compute attr_widths estimates for appendrels that are entirely excluded by constraints. So remove the Assert; we'll just fall back on get_typavgwidth in such cases.

  Also, since set_subquery_size_estimates calls set_baserel_size_estimates which calls set_rel_width, there's no need for set_subquery_size_estimates to call get_typavgwidth; set_rel_width will handle it for us if we just leave the estimate set to zero. Remove the unnecessary code.

  Per report from Erik Rijkers and subsequent investigation.
* Fix handling of extension membership when filling in a shell operator. (Tom Lane, 2011-08-22)

  The previous coding would result in deleting and not re-creating the extension membership pg_depend rows, since there was no CommandCounterIncrement that would allow recordDependencyOnCurrentExtension to see that the deletion had happened. Make it work like the shell type case, ie, keep the existing entries (and then throw an error if they're for the wrong extension).

  Per bug #6172 from Hitoshi Harada. Investigation and fix by Dimitri Fontaine.
* Fix trigger WHEN conditions when both BEFORE and AFTER triggers exist. (Tom Lane, 2011-08-21)

  Due to tuple-slot mismanagement, evaluation of WHEN conditions for AFTER ROW UPDATE triggers could crash if there had been a BEFORE ROW trigger fired for the same update. Fix by not trying to overload the use of estate->es_trig_tuple_slot. Per report from Yoran Heling.

  Back-patch to 9.0, when trigger WHEN conditions were introduced.
* Fix performance problem when building a lossy tidbitmap. (Tom Lane, 2011-08-20)

  As pointed out by Sergey Koposov, repeated invocations of tbm_lossify can make building a large tidbitmap into an O(N^2) operation. To fix, make sure we remove more than the minimum amount of information per call, and add a fallback path to behave sanely if we're unable to fit the bitmap within the requested amount of memory.

  This has been wrong since the tidbitmap code was written, so back-patch to all supported branches.
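  A sketch of the reshaped loop; the struct, helper, and 90% threshold are hypothetical stand-ins for the tidbitmap internals, chosen to illustrate the idea rather than the committed constants:

      #include <stdbool.h>

      typedef struct
      {
          int         nentries;       /* current exact + lossy entries */
          int         maxentries;     /* cap implied by work_mem */
      } TidBitmapDemo;

      extern bool lossify_one_page(TidBitmapDemo *tbm);   /* hypothetical */

      static void
      tbm_lossify_demo(TidBitmapDemo *tbm)
      {
          /* Free substantially more than one entry's worth of space, so
           * the very next insertion doesn't land us right back here;
           * that retriggering is what made construction O(N^2). */
          while (tbm->nentries > tbm->maxentries * 9 / 10)
          {
              if (!lossify_one_page(tbm))
              {
                  /* Nothing left to lossify: accept the overshoot rather
                   * than loop forever (the "behave sanely" fallback). */
                  tbm->maxentries = tbm->nentries;
                  break;
              }
          }
      }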
* Tag 9.1rc1. (tag: REL9_1_RC1) (Tom Lane, 2011-08-18)
* Explain max_prepared_transactions requirement in isolation tests' README. (Tom Lane, 2011-08-18)

  Now that we have a test that requires nondefault settings to pass, it seems like we'd better mention that detail in the directions about how to run the tests. Also do some very minor copy-editing.
* Report libpq errors correctly if session setup or teardown steps fail in isolation regression tests. (Heikki Linnakangas, 2011-08-18)

  Alvaro committed these fixes to master branch on Tue Jul 29th, as part of Noah Misch's patch. The rest of that patch is not needed on 9.1, but this part should be backpatched.
* Add an SSI regression test that tests all interesting permutations in the order of begin, prepare, and commit of three concurrent transactions that have conflicts between them. (Heikki Linnakangas, 2011-08-18)

  The test runs for quite a long time, and the expected output file is huge, but this test caught some serious bugs during development, so seems worthwhile to keep.

  The test uses prepared transactions, so it fails if the server has max_prepared_transactions=0. Because of that, it's marked as "ignore" in the schedule file.

  Dan Ports
* Strip whitespace from SQL blocks in the isolation test suite. (Heikki Linnakangas, 2011-08-18)

  This is purely cosmetic; it removes a lot of IMHO ugly whitespace from the expected output.
* Change PyInit_plpy to external linkage (Peter Eisentraut, 2011-08-18)

  Module initialization functions in Python 3 must have external linkage, because PyMODINIT_FUNC does dllexport on Windows-like platforms. Without this change, the build with Python 3 fails on Windows.
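  Why the linkage matters, in compilable form (the module definition here is a simplified stand-in for plpython's real one):

      #include <Python.h>

      #if PY_MAJOR_VERSION >= 3
      static struct PyModuleDef PLy_module = {
          PyModuleDef_HEAD_INIT,
          "plpy",                 /* module name */
          NULL,                   /* no docstring */
          -1,                     /* no per-interpreter state */
          NULL                    /* method table omitted in this sketch */
      };

      /* NOT static: PyMODINIT_FUNC carries dllexport on Windows, and an
       * exported symbol must have external linkage. */
      PyMODINIT_FUNC
      PyInit_plpy(void)
      {
          return PyModule_Create(&PLy_module);
      }
      #endif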
* Fix two issues in plpython's handling of composite results. (Tom Lane, 2011-08-17)

  Dropped columns within a composite type were not handled correctly. Also, we did not check for whether a composite result type had changed since we cached the information about it.

  Jan Urbański, per a bug report from Jean-Baptiste Quenot
* Properly handle empty arrays returned from plperl functions. (Andrew Dunstan, 2011-08-17)
  Bug reported by David Wheeler, fix by Alex Hunsaker.
* Translation updates (Peter Eisentraut, 2011-08-17)
* If backup-end record is not seen, and we reach end of recovery from a streamed backup, throw an error and refuse to start up. (Heikki Linnakangas, 2011-08-17)

  The restore has not finished correctly in that case and the data directory is possibly corrupt. We already errored out in case of archive recovery, but could not during crash recovery because we couldn't distinguish between the case that pg_start_backup() was called and the database then crashed (must not error, data is OK), and the case that we're restoring from a backup and not all the needed WAL was replayed (data can be corrupt).

  To distinguish those cases, add a line to backup_label to indicate whether the backup was taken with pg_start/stop_backup(), or by streaming (ie. pg_basebackup).

  This is a different implementation than what I committed to 9.2 a week ago. That implementation was not back-patchable because it required re-initdb.

  Fujii Masao
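  The distinguishing line, roughly as it might be written into backup_label (a sketch; see xlog.c for the exact strings):

      #include <stdbool.h>
      #include <stdio.h>

      /* One extra line lets crash recovery tell "crashed while a
       * pg_start_backup() was in progress" apart from "restored from a
       * streamed base backup that is missing WAL". */
      static void
      append_backup_method(FILE *label, bool from_streaming)
      {
          fprintf(label, "BACKUP METHOD: %s\n",
                  from_streaming ? "streamed" : "pg_start_backup");
      }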
* Move \r out of translatable strings (Peter Eisentraut, 2011-08-17)

  The translation tools are very unhappy about seeing \r in translatable strings, so move it to a separate fprintf call.
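  The before/after, as a self-contained sketch (the gettext declaration and message text are illustrative):

      #include <stdio.h>

      extern char *gettext(const char *msgid);
      #define _(x) gettext(x)

      static void
      report_progress(long done_kb, long total_kb, int pct)
      {
          /* Translatable text, free of control characters ... */
          fprintf(stderr, _("%ld/%ld kB (%d%%)"), done_kb, total_kb, pct);
          /* ... then the carriage return in its own, untranslated call. */
          fprintf(stderr, "\r");
      }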
* Forget about targeting catalog cache invalidations by tuple TID. (Tom Lane, 2011-08-16)

  The TID isn't stable enough: we might queue an sinval event before a VACUUM FULL, and then process it afterwards, when the target tuple no longer has the same TID. So we must invalidate entries on the basis of hash value only. The old coding can be shown to result in various bizarre, hard-to-reproduce errors in the presence of concurrent VACUUM FULLs on system catalogs, and could easily result in permanent catalog corruption, up to and including complete loss of tables.

  This commit is just a minimal fix that removes the unsafe comparison. We should remove transmission of the tuple TID from sinval messages altogether, and then arrange to suppress the extra message in the common case of a heap_update that doesn't change the key hashvalue. But that's going to be much more invasive, and will only produce a probably-marginal performance gain, so it doesn't seem like material for a back-patch.

  Back-patch to 9.0. Before that, VACUUM FULL refused to do any tuple moving if it found any INSERT_IN_PROGRESS or DELETE_IN_PROGRESS tuples (and CLUSTER would give up altogether), so there was no risk of moving a tuple that might be the subject of an unsent sinval message.
* Fix incorrect order of operations during sinval reset processing. (Tom Lane, 2011-08-16)

  We have to be sure that we have revalidated each nailed-in-cache relcache entry before we try to use it to load data for some other relcache entry. The introduction of "mapped relations" in 9.0 broke this, because although we updated the state kept in relmapper.c early enough, we failed to propagate that information into relcache entries soon enough; in particular, we could try to fetch pg_class rows out of pg_class before we'd updated its relcache entry's rd_node.relNode value from the map.

  This bug accounts for Dave Gould's report of failures after "vacuum full pg_class", and I believe that there is risk for other system catalogs as well.

  The core part of the fix is to copy relmapper data into the relcache entries during "phase 1" in RelationCacheInvalidate(), before they'll be used in "phase 2". To try to future-proof the code against other similar bugs, I also rearranged the order in which nailed relations are visited during phase 2: now it's pg_class first, then pg_class_oid_index, then other nailed relations. This should ensure that RelationClearRelation can apply RelationReloadIndexInfo to all nailed indexes without risking use of not-yet-revalidated relcache entries.

  Back-patch to 9.0 where the relation mapper was introduced.
* Preserve toast value OIDs in toast-swap-by-content for CLUSTER/VACUUM FULL. (Tom Lane, 2011-08-16)

  This works around the problem that a catalog cache entry might contain a toast pointer that we try to dereference just as a VACUUM FULL completes on that catalog. We will see the sinval message on the cache entry when we acquire lock on the toast table, but by that point we've already told tuptoaster.c "here's the pointer to fetch", so it's difficult from a code structural standpoint to update the pointer before we use it. Much less painful to ensure that toast pointers are not invalidated in the first place.

  We have to add a bit of code to deal with the case that a value that previously wasn't toasted becomes so; but that should be a seldom-exercised corner case, so the inefficiency shouldn't be significant.

  Back-patch to 9.0. In prior versions, we didn't allow CLUSTER on system catalogs, and VACUUM FULL didn't result in reassignment of toast OIDs, so there was no problem.
* Fix race condition in relcache init file invalidation. (Tom Lane, 2011-08-16)

  The previous code tried to synchronize by unlinking the init file twice, but that doesn't actually work: it leaves a window wherein a third process could read the already-stale init file but miss the SI messages that would tell it the data is stale. The result would be bizarre failures in catalog accesses, typically "could not read block 0 in file ..." later during startup.

  Instead, hold RelCacheInitLock across both the unlink and the sending of the SI messages. This is more straightforward, and might even be a bit faster since only one unlink call is needed.

  This has been wrong since it was put in (in 2002!), so back-patch to all supported releases.
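  The fixed sequence, rendered as a plain-pthreads analogy (the backend uses RelCacheInitLock and its sinval machinery instead; the broadcast helper below is hypothetical):

      #include <pthread.h>
      #include <unistd.h>

      static pthread_mutex_t init_file_lock = PTHREAD_MUTEX_INITIALIZER;

      extern void send_invalidation_messages(void);   /* hypothetical */

      /* Unlink and broadcast under ONE critical section; otherwise a
       * reader can load the stale file after the unlink but before the
       * invalidation messages reach it. */
      static void
      invalidate_init_file(const char *path)
      {
          pthread_mutex_lock(&init_file_lock);
          unlink(path);
          send_invalidation_messages();
          pthread_mutex_unlock(&init_file_lock);
      }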
* Adjust total size in pg_basebackup progress report when reality changes (Magnus Hagander, 2011-08-16)

  When streaming including WAL, the size estimate will always be incorrect, since we don't know how much WAL is included. To make sure the output doesn't look completely unreasonable, this patch increases the total size whenever we go past the estimate, to make sure we never go above 100%.
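  A sketch of the clamping (variable names illustrative):

      #include <stdint.h>
      #include <stdio.h>

      static void
      print_progress(int64_t totaldone, int64_t *totalsize)
      {
          if (*totalsize <= 0)
              return;                 /* no estimate to report against */

          /* Streamed WAL makes the initial estimate a lower bound, so
           * raise the total rather than ever printing more than 100%. */
          if (totaldone > *totalsize)
              *totalsize = totaldone;

          fprintf(stderr, "%lld/%lld kB (%d%%)\r",
                  (long long) (totaldone / 1024),
                  (long long) (*totalsize / 1024),
                  (int) ((totaldone * 100) / *totalsize));
      }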
* Make pg_basebackup progress report translatable (Peter Eisentraut, 2011-08-16)

  Also fix a potential portability bug, because INT64_FORMAT is only guaranteed to be available with snprintf, not fprintf.
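  The portable idiom is to render the value with snprintf and hand fprintf only %s; a sketch with a stand-in macro definition:

      #include <stdint.h>
      #include <stdio.h>

      #ifndef INT64_FORMAT
      #define INT64_FORMAT "%lld"     /* stand-in; normally from pg_config.h */
      #endif

      static void
      print_total(int64_t totalsize)
      {
          char        totalbuf[32];

          /* Never pass INT64_FORMAT to fprintf directly; only (the
           * possibly replacement) snprintf is guaranteed to grok it. */
          snprintf(totalbuf, sizeof(totalbuf), INT64_FORMAT,
                   (long long) totalsize);
          fprintf(stderr, "%s kB total\n", totalbuf);
      }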
* Use less cryptic variable names (Peter Eisentraut, 2011-08-16)
* Adjust regression tests for error message change (Peter Eisentraut, 2011-08-15)
* Add "Reason code" prefix to internal SSI error messagesPeter Eisentraut2011-08-15
| | | | | | | | | | This makes it clearer that the error message is perhaps not supposed to be understood by users, and it also makes it somewhat clearer that it was not accidentally omitted from translation. Idea from Heikki Linnakangas, except that we don't mark "Reason code" for translation at this point, because that would make the implementation too cumbersome.
* Fix unsafe order of operations in foreign-table DDL commands. (Tom Lane, 2011-08-14)

  When updating or deleting a system catalog tuple, it's necessary to acquire RowExclusiveLock on the catalog before looking up the tuple; otherwise a concurrent VACUUM FULL on the catalog might move the tuple to a different TID before we can apply the update. Coding patterns that find the tuple via a table scan aren't at risk here, but when obtaining the tuple from a catalog cache, correct ordering is important; and several routines in foreigncmds.c got it wrong.

  Noted while running the regression tests in parallel with VACUUM FULL of assorted system catalogs.

  For consistency I moved all the heap_open calls to the starts of their functions, including a couple for which there was no actual bug.

  Back-patch to 8.4 where foreigncmds.c was added.
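  A schematic fragment of the corrected ordering (backend routine names; not compilable on its own, and the specific syscache and relation identifiers are illustrative):

      /* Lock the catalog FIRST ... */
      rel = heap_open(ForeignServerRelationId, RowExclusiveLock);

      /* ... and only then fetch the tuple: a concurrent VACUUM FULL can
       * no longer move it between the lookup and the update. */
      tup = SearchSysCacheCopy1(FOREIGNSERVEROID, ObjectIdGetDatum(srvId));

      simple_heap_update(rel, &tup->t_self, tup);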
* Fix incorrect timeout handling during initial authentication transaction. (Tom Lane, 2011-08-13)

  The statement start timestamp was not set before initiating the transaction that is used to look up client authentication information in pg_authid. In consequence, enable_sig_alarm computed a wrong value (far in the past) for statement_fin_time. That didn't have any immediate effect, because the timeout alarm was set without reference to statement_fin_time; but if we subsequently blocked on a lock for a short time, CheckStatementTimeout would consult the bogus value when we cancelled the lock timeout wait, and then conclude we'd timed out, leading to immediate failure of the connection attempt. Thus an innocent "vacuum full pg_authid" would cause failures of concurrent connection attempts. Noted while testing other, more serious consequences of vacuum full on system catalogs.

  We should set the statement timestamp before StartTransactionCommand(), so that the transaction start timestamp is also valid. I'm not sure if there are any non-cosmetic effects of it not being valid, but the xact timestamp is at least sent to the statistics machinery.

  Back-patch to 9.0. Before that, the client authentication timeout was done outside any transaction and did not depend on this state to be valid.
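  The fix is essentially an ordering change at the call site (a two-line sketch using the backend's function names):

      /* Set the statement timestamp first, so the transaction start
       * timestamp taken inside StartTransactionCommand() is valid too. */
      SetCurrentStatementStartTimestamp();
      StartTransactionCommand();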
* Unbreak legacy syntax "COMMENT ON RULE x IS y", with no relation name. (Robert Haas, 2011-08-11)

  check_object_ownership() isn't happy about the null relation pointer. We could fix it there, but this seems more future-proof.