Commit message   Author   Date
* Update URL for pgtclng project.   Tom Lane   2012-04-06
| | | | Thom Brown
* Fix misleading output from gin_desc().   Tom Lane   2012-04-06
| | | | | | | | | | | XLOG_GIN_UPDATE_META_PAGE and XLOG_GIN_DELETE_LISTPAGE records were printed with a list link field labeled as "blkno", which was confusing, especially when the link was empty (InvalidBlockNumber). Print the metapage block number instead, since that's what's actually being updated. We could include the link values too as a separate field, but it's not clear that's worth the trouble. Back-patch to 8.4 where the dubious code was added.
* Fix syslogger to not lose log coherency under high load.   Tom Lane   2012-04-04
| | | | | | | | | The original coding of the syslogger had an arbitrary limit of 20 large messages concurrently in progress, after which it would just punt and dump message fragments to the output file separately. Our ambitions are a bit higher than that now, so allow the data structure to expand as necessary. Reported and patched by Andrew Dunstan; some editing by Tom
* Fix a couple of contrib/dblink bugs.   Tom Lane   2012-04-03
| | | | | | | | | | | | | | | dblink_exec leaked temporary database connections if any error occurred after connection setup, for example SELECT dblink_exec('...connect string...', 'select 1/0'); Add a PG_TRY block to ensure PQfinish gets done when it is needed. (dblink_record_internal is on the hairy edge of needing similar treatment, but seems not to be actively broken at the moment.) Also, in 9.0 and up, only one of the three functions using tuplestore return mode was properly checking that the query context would allow a tuplestore result. Noted while reviewing dblink patch. Back-patch to all supported branches.
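As an illustration, here is a minimal sketch (not the committed dblink code) of the PG_TRY pattern described above: the temporary libpq connection gets PQfinish'd on the error path as well as on normal exit. The function name and the freeconn flag are assumptions standing in for dblink's internals.

    #include "postgres.h"
    #include "libpq-fe.h"

    /* Sketch only: run one command on a possibly-temporary connection. */
    static void
    exec_on_temp_connection(PGconn *conn, const char *sql, bool freeconn)
    {
        PG_TRY();
        {
            PGresult   *res = PQexec(conn, sql);
            bool        ok = (PQresultStatus(res) == PGRES_COMMAND_OK ||
                              PQresultStatus(res) == PGRES_TUPLES_OK);
            char       *msg = ok ? NULL : pstrdup(PQerrorMessage(conn));

            PQclear(res);
            if (!ok)
                ereport(ERROR,
                        (errmsg("could not execute remote command: %s", msg)));
        }
        PG_CATCH();
        {
            /* close the temporary connection before re-throwing the error */
            if (freeconn)
                PQfinish(conn);
            PG_RE_THROW();
        }
        PG_END_TRY();

        /* normal exit: a temporary (unnamed) connection is still closed here */
        if (freeconn)
            PQfinish(conn);
    }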
* Fix O(N^2) behavior in pg_dump when many objects are in dependency loops.   Tom Lane   2012-03-31
| | | | | | | | | | | | | | Combining the loop workspace with the record of already-processed objects might have been a cute trick, but it behaves horridly if there are many dependency loops to repair: the time spent in the first step of findLoop() grows as O(N^2). Instead use a separate flag array indexed by dump ID, which we can check in constant time. The length of the workspace array is now never more than the actual length of a dependency chain, which should be reasonably short in all cases of practical interest. The code is noticeably easier to understand this way, too. Per gripe from Mike Roest. Since this is a longstanding performance bug, backpatch to all supported versions.
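A minimal sketch of the data-structure change, assuming dump IDs are small consecutive integers as they are in pg_dump; the helper names below are illustrative, not pg_dump's.

    #include <stdbool.h>
    #include <stdlib.h>

    /* One flag per possible dump ID, replacing an O(N) scan of the workspace. */
    static bool *processed;

    static void
    init_processed_flags(int maxDumpId)
    {
        processed = calloc(maxDumpId + 1, sizeof(bool));
    }

    static bool
    already_processed(int dumpId)       /* constant-time membership test */
    {
        return processed[dumpId];
    }

    static void
    mark_processed(int dumpId)
    {
        processed[dumpId] = true;
    }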
* Fix O(N^2) behavior in pg_dump for large numbers of owned sequences.   Tom Lane   2012-03-31
| | | | | | | | | | | | The loop that matched owned sequences to their owning tables required time proportional to the number of owned sequences times the number of tables. This work was only expended in selective-dump situations, which is probably why the issue wasn't recognized long since. Refactor slightly so that we can perform this work after the index array for findTableByOid has been set up, reducing the time to O(M log N). Per gripe from Mike Roest. Since this is a longstanding performance bug, backpatch to all supported versions.
* Fix dblink's failure to report correct connection name in error messages.   Tom Lane   2012-03-29
| | | | | | | | | | | | | | The DBLINK_GET_CONN and DBLINK_GET_NAMED_CONN macros did not set the surrounding function's conname variable, causing errors to be incorrectly reported as having occurred on the "unnamed" connection in some cases. This bug was actually visible in two cases in the regression tests, but apparently whoever added those cases wasn't paying attention. Noted by Kyotaro Horiguchi, though this is different from his proposed patch. Back-patch to 8.4; 8.3 does not have the same type of error reporting so the patch is not relevant.
* Correct epoch of txid_current() when executed on a Hot Standby server.   Simon Riggs   2012-03-29
| | | | | | | | | Initialise ckptXidEpoch from starting checkpoint and maintain the correct value as we roll forwards. This allows GetNextXidAndEpoch() to return the correct epoch when executed during recovery. Backpatch to 9.0 when the problem is first observable by a user. Bug report from Daniel Farina
* pg_basebackup: Error handling fixes.   Robert Haas   2012-03-28
| | | | Thomas Ogrisegg and Fujii Masao
* Fix COPY FROM for null marker strings that correspond to invalid encoding.   Tom Lane   2012-03-25
| | | | | | | | | | | | | The COPY documentation says "COPY FROM matches the input against the null string before removing backslashes". It is therefore reasonable to presume that null markers like E'\\0' will work ... and they did, until someone put the tests in the wrong order during microoptimization-driven rewrites. Since then, we've been failing if the null marker is something that would de-escape to an invalidly-encoded string. Since null markers generally need to be something that can't appear in the data, this represents a nontrivial loss of functionality; surprising nobody noticed it earlier. Per report from Jeff Davis. Backpatch to 8.4 where this got broken.
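The required ordering is easy to sketch (illustrative code, not the actual copy.c logic): the raw, still-escaped field text is matched against the null marker first, and only non-matching fields go on to backslash de-escaping and encoding verification.

    #include <stdbool.h>
    #include <string.h>

    /*
     * Sketch: is this raw COPY FROM field the null marker?  The comparison
     * happens before any de-escaping, so a marker such as E'\\0' works even
     * though de-escaping it would yield an invalidly encoded string.
     */
    static bool
    raw_field_is_null(const char *raw_field, int raw_len,
                      const char *null_print, int null_print_len)
    {
        return (raw_len == null_print_len &&
                memcmp(raw_field, null_print, raw_len) == 0);
    }

Only when this returns false would the field be de-escaped and encoding-checked.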
* Fix planner's handling of outer PlaceHolderVars within subqueries.   Tom Lane   2012-03-24
| | | | | | | | | | | | | | | | | | | | | | | For some reason, in the original coding of the PlaceHolderVar mechanism I had supposed that PlaceHolderVars couldn't propagate into subqueries. That is of course entirely possible. When it happens, we need to treat an outer-level PlaceHolderVar much like an outer Var or Aggref, that is SS_replace_correlation_vars() needs to replace the PlaceHolderVar with a Param, and then when building the finished SubPlan we have to provide the PlaceHolderVar expression as an actual parameter for the SubPlan. The handling of the contained expression is a bit delicate but it can be treated exactly like an Aggref's expression. In addition to the missing logic in subselect.c, prepjointree.c was failing to search subqueries for PlaceHolderVars that need their relids adjusted during subquery pullup. It looks like everyplace else that touches PlaceHolderVars got it right, though. Per report from Mark Murawski. In 9.1 and HEAD, queries affected by this oversight would fail with "ERROR: Upper-level PlaceHolderVar found where not expected". But in 9.0 and 8.4, you'd silently get possibly-wrong answers, since the value transmitted into the subquery wouldn't go to null when it should.
* Cast some printf arguments to avoid possibly-nonportable behavior.   Tom Lane   2012-03-23
| | | | Per compiler warnings on buildfarm member black_firefly.
* Update docs on numeric storage requirements.   Robert Haas   2012-03-23
| | | | | | Since 9.1, the minimum overhead is three bytes, not five. Fujii Masao
* Fix GET DIAGNOSTICS for case of assignment to function's first variable.   Tom Lane   2012-03-22
| | | | | | | | | | | An incorrect and entirely unnecessary "safety check" in exec_stmt_getdiag() caused the code to treat an assignment to a variable with dno zero as a no-op. Unfortunately, that's a perfectly valid dno. This has been broken since GET DIAGNOSTICS was invented. It's not terribly surprising that the bug went unnoticed for so long, since in most cases you probably wouldn't use the function's first-created variable (normally its first parameter) as a GET DIAGNOSTICS target. Nonetheless, it's broken. Per bug #6551 from Adam Buraczewski.
* Back-patch contrib/vacuumlo's new -l (limit) option into 9.0 and 9.1.   Tom Lane   2012-03-21
| | | | | | | | | | | | | | | | | | | Since 9.0, removing lots of large objects in a single transaction risks exceeding max_locks_per_transaction, because we merged large object removal into the generic object-drop mechanism, which takes out an exclusive lock on each object to be dropped. This creates a hazard for contrib/vacuumlo, which has historically tried to drop all unreferenced large objects in one transaction. There doesn't seem to be any correctness requirement to do it that way, though; we only need to drop enough large objects per transaction to amortize the commit costs. To prevent a regression from pre-9.0 releases wherein vacuumlo worked just fine, back-patch commits b69f2e36402aaa222ed03c1769b3de6d5be5f302 and 64c604898e812aa93c124c666e8709fff1b8dd26, which break vacuumlo's deletions into multiple transactions with a user-controllable upper limit on the number of objects dropped per transaction. Tim Lewis, Robert Haas, Tom Lane
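The batching idea can be sketched with plain libpq; the loop, the transaction_limit parameter (mirroring the -l option), and the OID array are illustrative, and error checking is omitted for brevity.

    #include <stdio.h>
    #include "libpq-fe.h"

    /* Drop large objects in batches so no one transaction holds too many locks. */
    static void
    drop_los_in_batches(PGconn *conn, const Oid *los, int nlos, int transaction_limit)
    {
        int         in_xact = 0;
        int         i;

        PQclear(PQexec(conn, "BEGIN"));
        for (i = 0; i < nlos; i++)
        {
            char        sql[64];

            snprintf(sql, sizeof(sql), "SELECT lo_unlink(%u)", los[i]);
            PQclear(PQexec(conn, sql));

            if (++in_xact >= transaction_limit)
            {
                /* commit to release the locks, then start a fresh transaction */
                PQclear(PQexec(conn, "COMMIT"));
                PQclear(PQexec(conn, "BEGIN"));
                in_xact = 0;
            }
        }
        PQclear(PQexec(conn, "COMMIT"));
    }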
* Don't allow CREATE TABLE AS to put relations in pg_global.   Robert Haas   2012-03-21
| | | | | | | | | | | | | This was never intended to be allowed, and is blocked for an ordinary CREATE TABLE, but CREATE TABLE AS slipped through the cracks. This commit won't do anything to fix existing cases where this loophole has been exploited, but it still seems prudent to lock it down going forward. Back-branch commit only, as this problem has been refactored away on the master branch. Andres Freund
* Fix bug where walsender goes into a busy loop if connection is terminated.   Heikki Linnakangas   2012-03-21
| | | | | | | | | | | The problem was that ResetLatch was not being called in the walsender loop if the connection was terminated, so WaitLatch returned immediately instead of sleeping, busy-looping until the terminated connection was detected. In the master branch, this was already fixed as a side-effect of some refactoring of the loop. This commit backports that refactoring to 9.1. 9.0 does not have this bug, because we didn't use latches back then. Fujii Masao
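The idiom the fix restores looks roughly like this (a sketch, not the walsender code; the worker callbacks are assumptions, and the exact WaitLatch() signature differs between branches):

    #include "postgres.h"
    #include "storage/latch.h"

    /*
     * Reset the latch at the top of every iteration, then re-check for work,
     * then sleep.  Skipping ResetLatch() on any path (such as a terminated
     * connection) leaves the latch set, so WaitLatch() returns immediately
     * and the loop busy-spins.
     */
    static void
    sender_loop(Latch *latch, long sleeptime,
                bool (*connection_terminated) (void),
                void (*send_pending_wal) (void))
    {
        for (;;)
        {
            ResetLatch(latch);

            if (connection_terminated())
                break;
            send_pending_wal();

            WaitLatch(latch, sleeptime);
        }
    }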
* Update struct Trigger in docs   Alvaro Herrera   2012-03-20
|
* plperl: Package-qualify _TD   Alvaro Herrera   2012-03-19
| | | | | | | | | | Failing to do so causes trigger invocations to fail when they are nested within a function invocation that changes the current package. Backpatch to 9.1; previous releases used a different method to obtain _TD. Per bug report from Mark Murawski (bug #6511) Author: Alex Hunsaker
* In pg_upgrade, remove dependency on pg_config, as that might not be in   Bruce Momjian   2012-03-19
| | | | | | | | | | the non-development install. Instead, use the LOAD mechanism to check for the pg_upgrade_support shared object, like we do for other shared object checks. Backpatch to 9.1. Report from Àlvaro
* Honor inputdir and outputdir when converting regression files.   Andrew Dunstan   2012-03-17
| | | | | | | | | When converting source files, pg_regress' inputdir and outputdir options were ignored when computing the locations of the destination files. In consequence, these options were effectively unusable when the regression inputs needed to be adjusted by pg_regress. This patch makes pg_regress put the converted files in the same place where these options say non-converted input and results files are to be found. Backpatched to all live branches.
* pg_restore: Fix memory and file descriptor leak with directory format   Peter Eisentraut   2012-03-16
| | | | found by Coverity
* Revisit handling of UNION ALL subqueries with non-Var output columns.   Tom Lane   2012-03-16
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | In commit 57664ed25e5dea117158a2e663c29e60b3546e1c I tried to fix a bug reported by Teodor Sigaev by making non-simple-Var output columns distinct (by wrapping their expressions with dummy PlaceHolderVar nodes). This did not work too well. Commit b28ffd0fcc583c1811e5295279e7d4366c3cae6c fixed some ensuing problems with matching to child indexes, but per a recent report from Claus Stadler, constraint exclusion of UNION ALL subqueries was still broken, because constant-simplification didn't handle the injected PlaceHolderVars well either. On reflection, the original patch was quite misguided: there is no reason to expect that EquivalenceClass child members will be distinct. So instead of trying to make them so, we should ensure that we can cope with the situation when they're not. Accordingly, this patch reverts the code changes in the above-mentioned commits (though the regression test cases they added stay). Instead, I've added assorted defenses to make sure that duplicate EC child members don't cause any problems. Teodor's original problem ("MergeAppend child's targetlist doesn't match MergeAppend") is addressed more directly by revising prepare_sort_from_pathkeys to let the parent MergeAppend's sort list guide creation of each child's sort list. In passing, get rid of add_sort_column; as far as I can tell, testing for duplicate sort keys at this stage is dead code. Certainly it doesn't trigger often enough to be worth expending cycles on in ordinary queries. And keeping the test would've greatly complicated the new logic in prepare_sort_from_pathkeys, because comparing pathkey list entries against a previous output array requires that we not skip any entries in the list. Back-patch to 9.1, like the previous patches. The only known issue in this area that wasn't caused by the ill-advised previous patches was the MergeAppend planning failure, which of course is not relevant before 9.1. It's possible that we need some of the new defenses against duplicate child EC entries in older branches, but until there's some clear evidence of that I'm going to refrain from back-patching further.
* Patch some corner-case bugs in pl/python.   Tom Lane   2012-03-13
| | | | | | | | | | | | Dave Malcolm of Red Hat is working on a static code analysis tool for Python-related C code. It reported a number of problems in plpython, most of which were failures to check for NULL results from object-creation functions, and so would only be an issue in very-low-memory situations. Patch in HEAD and 9.1. We could go further back but it's not clear that these issues are important enough to justify the work. Jan Urbański
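The class of fix is straightforward to sketch (illustrative, not the committed patch): every Python object-creation call can return NULL under memory pressure, so each result is checked before use and partly-built objects are released.

    #include <Python.h>

    /* Sketch: build a Python list of the integers 0..n-1, checking every allocation. */
    static PyObject *
    build_int_list(Py_ssize_t n)
    {
        PyObject   *list = PyList_New(n);
        Py_ssize_t  i;

        if (list == NULL)
            return NULL;            /* caller reports the Python error */

        for (i = 0; i < n; i++)
        {
            PyObject   *val = PyLong_FromLong((long) i);

            if (val == NULL)
            {
                Py_DECREF(list);    /* don't leak the partly-built list */
                return NULL;
            }
            PyList_SET_ITEM(list, i, val);  /* steals the reference to val */
        }
        return list;
    }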
* Remove tabs in SGML files   Bruce Momjian   2012-03-12
|
* Add description for --no-locale and --text-search-config.   Tatsuo Ishii   2012-03-11
|
* ecpg: Fix off-by-one error in memory copying   Peter Eisentraut   2012-03-11
| | | | | | | In a rare case, one byte past the end of memory belonging to the sqlca_t structure would be written to. found by Coverity
* ecpg: Fix rare memory leaks   Peter Eisentraut   2012-03-11
| | | | found by Coverity
* psql: Fix invalid memory access   Peter Eisentraut   2012-03-11
| | | | | | | | Due to an apparent thinko, when printing a table in expanded mode (\x), space would be allocated for 1 slot plus 1 byte per line, instead of 1 slot per line plus 1 slot for the NULL terminator. When the line count is small, reading or writing the terminator would therefore access memory beyond what was allocated.
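In other words, the sizing looked roughly like the commented-out line below rather than the correct one; the struct is a stand-in for illustration, not psql's actual type.

    #include <stdlib.h>

    struct line_slot                /* stand-in for psql's per-line slot */
    {
        char       *ptr;
        int         width;
    };

    static struct line_slot *
    allocate_line_slots(int nlines)
    {
        /* The thinko: one slot plus one *byte* per line. */
        /* return malloc(sizeof(struct line_slot) + nlines); */

        /* Correct: one slot per line plus one slot for the NULL terminator. */
        return calloc(nlines + 1, sizeof(struct line_slot));
    }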
* libpq: Fix memory leak   Peter Eisentraut   2012-03-11
| | | | | | If a client encoding is specified as a connection parameter (or environment variable), internal storage allocated for it would never be freed.
* Fix some issues with temp/transient tables in extension scripts.   Tom Lane   2012-03-08
| | | | | | | | | | | | | | | | | | | | | | | | | | | | Phil Sorber reported that a rewriting ALTER TABLE within an extension update script failed, because it creates and then drops a placeholder table; the drop was being disallowed because the table was marked as an extension member. We could hack that specific case but it seems likely that there might be related cases now or in the future, so the most practical solution seems to be to create an exception to the general rule that extension member objects can only be dropped by dropping the owning extension. To wit: if the DROP is issued within the extension's own creation or update scripts, we'll allow it, implicitly performing an "ALTER EXTENSION DROP object" first. This will simplify cases such as extension downgrade scripts anyway. No docs change since we don't seem to have documented the idea that you would need ALTER EXTENSION DROP for such an action to begin with. Also, arrange for explicitly temporary tables to not get linked as extension members in the first place, and the same for the magic pg_temp_nnn schemas that are created to hold them. This prevents assorted unpleasant results if an extension script creates a temp table: the forced drop at session end would either fail or remove the entire extension, and neither of those outcomes is desirable. Note that this doesn't fix the ALTER TABLE scenario, since the placeholder table is not temp (unless the table being rewritten is). Back-patch to 9.1.
* Attempt to reduce locale dependencies in regression tests.   Robert Haas   2012-03-07
| | | | | | | Commit 3e9a2672d25aed15ae6b4a09decbd8927d069868 fixed this for master, but REL9_1_STABLE also needs fixing; this is a back-branch commit only. Tomas Vondra
* Improve documentation around logging_collector and use of stderr.   Tom Lane   2012-03-05
| | | | | | | | | | | | | In backup.sgml, point out that you need to be using the logging collector if you want to log messages from a failing archive_command script. (This is an oversimplification, in that it will work without the collector as long as you're not sending postmaster stderr to /dev/null; but it seems like a good idea to encourage use of the collector to avoid problems with multiple processes concurrently scribbling on one file.) In config.sgml, do some wordsmithing of logging_collector discussion. Per bug #6518 from Janning Vygen
* Allow child-relation entries to be made in ec_has_const EquivalenceClasses.   Tom Lane   2012-03-02
| | | | | | | | | | | | | | | | | | | | | | This fixes an oversight in commit 11cad29c91524aac1d0b61e0ea0357398ab79bf8, which introduced MergeAppend plans. Before that happened, we never particularly cared about the sort ordering of scans of inheritance child relations, since appending their outputs together would destroy any ordering anyway. But now it's important to be able to match child relation sort orderings to those of the surrounding query. The original coding of add_child_rel_equivalences skipped ec_has_const EquivalenceClasses, on the originally-correct grounds that adding child expressions to them was useless. The effect of this is that when a parent variable is equated to a constant, we can't recognize that index columns on the equivalent child variables are not sort-significant; that is, we can't recognize that a child index on, say, (x, y) is able to generate output in "ORDER BY y" order when there is a clause "WHERE x = constant". Adding child expressions to the (x, constant) EquivalenceClass fixes this, without any downside that I can see other than a few more planner cycles expended on such queries. Per recent gripe from Robert McGehee. Back-patch to 9.1 where MergeAppend was introduced.
* Simplify references to backslash-doubling in func.sgml.   Tom Lane   2012-02-29
| | | | | | | | | | Several places were still written as though standard_conforming_strings didn't exist, much less be the default. Now that it is on by default, we can simplify the text and just insert occasional notes suggesting that you might have to think harder if it's turned off. Per discussion of a suggestion from Hannes Frederic Sowa. Back-patch to 9.1 where standard_conforming_strings was made the default.
* Correctly detect SSI conflicts of prepared transactions after crash.   Heikki Linnakangas   2012-02-29
| | | | | | | | | | | | A prepared transaction can get new conflicts in and out after preparing, so we cannot rely on the in- and out-flags stored in the statefile at prepare time. As a quick fix, make the conservative assumption that after a restart, all prepared transactions are considered to have both in- and out-conflicts. That can lead to unnecessary rollbacks after a crash, but that shouldn't be a big problem in practice; you don't want prepared transactions to hang around for a long time anyway. Dan Ports
* Fix some more bugs in GIN's WAL replay logic.   Tom Lane   2012-02-26
| | | | | | | | | | | | | | | | | | In commit 4016bdef8aded77b4903c457050622a5a1815c16 I fixed a bunch of ginxlog.c bugs having to do with not handling XLogReadBuffer failures correctly. However, in ginRedoUpdateMetapage and ginRedoDeleteListPages, I unaccountably thought that failure to read the metapage would be impossible and just put in an elog(PANIC) call. This is of course wrong: failure is exactly what will happen if the index got dropped (or rebuilt) between creation of the WAL record and the crash we're trying to recover from. I believe this explains Nicholas Wilson's recent report of these errors getting reached. Also, fix memory leak in forgetIncompleteSplit. This wasn't of much concern when the code was written, but in a long-running standby server page split records could be expected to accumulate indefinitely. Back-patch to 8.4 --- before that, GIN didn't have a metapage.
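The corrected redo pattern looks roughly like this (a sketch, not the committed ginxlog.c code): a failed metapage read during recovery means the index has since been dropped or rebuilt, so the record is skipped instead of PANICing.

    #include "postgres.h"
    #include "access/xlogutils.h"
    #include "storage/bufmgr.h"
    #include "storage/relfilenode.h"

    /* Sketch of redo for a GIN metapage-update record. */
    static void
    redo_update_metapage(RelFileNode node, BlockNumber metablkno)
    {
        Buffer      metabuffer = XLogReadBuffer(node, metablkno, false);

        if (!BufferIsValid(metabuffer))
            return;                 /* index dropped or rebuilt since the record was written */

        /* ... apply the metapage changes here ... */
        MarkBufferDirty(metabuffer);
        UnlockReleaseBuffer(metabuffer);
    }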
* Stamp 9.1.3.  [REL9_1_3]   Tom Lane   2012-02-23
|
* Last-minute release note updates.   Tom Lane   2012-02-23
| | | | Security: CVE-2012-0866, CVE-2012-0867, CVE-2012-0868
* Convert newlines to spaces in names written in pg_dump comments.   Tom Lane   2012-02-23
| | | | | | | | | | | | pg_dump was incautious about sanitizing object names that are emitted within SQL comments in its output script. A name containing a newline would at least render the script syntactically incorrect. Maliciously crafted object names could present a SQL injection risk when the script is reloaded. Reported by Heikki Linnakangas, patch by Robert Haas Security: CVE-2012-0868
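The sanitization itself is simple to sketch (illustrative helper name, not necessarily pg_dump's): anything that could break out of a "-- ..." comment line is replaced before the name is printed.

    /*
     * Replace characters that would terminate a "-- ..." SQL comment line.
     * Modifies the string in place.
     */
    static void
    sanitize_comment_text(char *s)
    {
        for (; *s; s++)
        {
            if (*s == '\n' || *s == '\r')
                *s = ' ';
        }
    }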
* Remove arbitrary limitation on length of common name in SSL certificates.   Tom Lane   2012-02-23
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | Both libpq and the backend would truncate a common name extracted from a certificate at 32 bytes. Replace that fixed-size buffer with dynamically allocated string so that there is no hard limit. While at it, remove the code for extracting peer_dn, which we weren't using for anything; and don't bother to store peer_cn longer than we need it in libpq. This limit was not so terribly unreasonable when the code was written, because we weren't using the result for anything critical, just logging it. But now that there are options for checking the common name against the server host name (in libpq) or using it as the user's name (in the server), this could result in undesirable failures. In the worst case it even seems possible to spoof a server name or user name, if the correct name is exactly 32 bytes and the attacker can persuade a trusted CA to issue a certificate in which that string is a prefix of the certificate's common name. (To exploit this for a server name, he'd also have to send the connection astray via phony DNS data or some such.) The case that this is a realistic security threat is a bit thin, but nonetheless we'll treat it as one. Back-patch to 8.4. Older releases contain the faulty code, but it's not a security problem because the common name wasn't used for anything interesting. Reported and patched by Heikki Linnakangas Security: CVE-2012-0867
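One way to sketch the dynamic allocation, assuming the usual OpenSSL idiom of calling X509_NAME_get_text_by_NID() with a NULL buffer first to learn the length (error handling trimmed; not the committed code):

    #include <stdlib.h>
    #include <openssl/x509.h>
    #include <openssl/objects.h>

    /*
     * Extract the certificate's common name into a freshly allocated string
     * of exactly the right size, instead of a fixed 32-byte buffer.
     * Returns NULL if there is no CN or if allocation fails.
     */
    static char *
    get_common_name(X509 *cert)
    {
        X509_NAME  *subject = X509_get_subject_name(cert);
        int         len = X509_NAME_get_text_by_NID(subject, NID_commonName,
                                                    NULL, 0);
        char       *cn;

        if (len < 0)
            return NULL;
        cn = malloc(len + 1);
        if (cn == NULL)
            return NULL;
        X509_NAME_get_text_by_NID(subject, NID_commonName, cn, len + 1);
        return cn;
    }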
* Require execute permission on the trigger function for CREATE TRIGGER.   Tom Lane   2012-02-23
| | | | | | | | | | | | | | | | | This check was overlooked when we added function execute permissions to the system years ago. For an ordinary trigger function it's not a big deal, since trigger functions execute with the permissions of the table owner, so they couldn't do anything the user issuing the CREATE TRIGGER couldn't have done anyway. However, if a trigger function is SECURITY DEFINER, that is not the case. The lack of checking would allow another user to install it on his own table and then invoke it with, essentially, forged input data; which the trigger function is unlikely to realize, so it might do something undesirable, for instance insert false entries in an audit log table. Reported by Dinesh Kumar, patch by Robert Haas Security: CVE-2012-0866
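The added check follows the usual backend pattern for function EXECUTE permission; this is a sketch, and the exact ACL helper names of that era are quoted from memory, so treat them as assumptions.

    #include "postgres.h"
    #include "miscadmin.h"
    #include "utils/acl.h"
    #include "utils/lsyscache.h"

    /* Sketch: require EXECUTE on the trigger function before creating the trigger. */
    static void
    check_trigger_function_acl(Oid funcoid)
    {
        AclResult   aclresult = pg_proc_aclcheck(funcoid, GetUserId(), ACL_EXECUTE);

        if (aclresult != ACLCHECK_OK)
            aclcheck_error(aclresult, ACL_KIND_PROC, get_func_name(funcoid));
    }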
* Allow MinGW builds to use standardly-named OpenSSL libraries.   Tom Lane   2012-02-23
| | | | | | | | In the Fedora variant of MinGW, the openssl libraries have their normal names, not libeay32 and libssleay32. Adjust configure probes to allow that, per bug #6486. Tomasz Ostrowski
* Translation updates   Peter Eisentraut   2012-02-23
|
* Remove inappropriate quotes   Peter Eisentraut   2012-02-23
| | | | And adjust wording for consistency.
* Draft release notes for 9.1.3, 9.0.7, 8.4.11, 8.3.18.   Tom Lane   2012-02-22
|
* REASSIGN OWNED: Support foreign data wrappers and servers   Alvaro Herrera   2012-02-22
| | | | | | | This was overlooked when implementing those kinds of objects, in commit cae565e503c42a0942ca1771665243b4453c5770. Per report from Pawel Casperek.
* Correctly initialise shared recoveryLastRecPtr in recovery.   Simon Riggs   2012-02-22
| | | | | | | | Previously we used ReadRecPtr rather than EndRecPtr, which was not a serious error but caused pg_stat_replication to report incorrect replay_location until at least one WAL record is replayed. Fujii Masao
* Don't clear btpo_cycleid during _bt_vacuum_one_page.   Tom Lane   2012-02-21
| | | | | | | | | | | | | | When "vacuuming" a single btree page by removing LP_DEAD tuples, we are not actually within a vacuum operation, but rather in an ordinary insertion process that could well be running concurrently with a vacuum. So clearing the cycleid is incorrect, and could cause the concurrent vacuum to miss removing tuples that it needs to remove. This is a longstanding bug introduced by commit e6284649b9e30372b3990107a082bc7520325676 of 2006-07-25. I believe it explains Maxim Boguk's recent report of index corruption, and probably some other previously unexplained reports. In 9.0 and up this is a one-line fix; before that we need to introduce a flag to tell _bt_delitems what to do.
* Avoid double close of file handle in syslogger on win32   Magnus Hagander   2012-02-21
| | | | | | | This causes an exception when running under a debugger or in particular when running on a debug version of Windows. Patch from MauMau