path: root/src
* Fix lo_import and lo_export to return useful error messages more often. (Tom Lane, 2012-10-08)
  I found that these functions tend to return -1 while leaving an empty error message string in the PGconn, if they suffer some kind of I/O error on the file. The reason is that lo_close, which thinks it's executed a perfectly fine SQL command, clears the errorMessage. The minimum-change workaround is to reorder operations here so that we don't fill the errorMessage until after lo_close.
* Fix lo_export usage in example programs. (Tom Lane, 2012-10-08)
  lo_export returns -1, not zero, on failure.
* Say ANALYZE, not VACUUM, in error message on analyze in hot standby. (Heikki Linnakangas, 2012-10-08)
  Tomonaru Katsumata
* Fixed test for array boundary. (Michael Meskes, 2012-10-05)
  get_data() used to continue only on finding an array boundary, instead of continuing when the next character is not a boundary, so it was unable to read any element after the first.
* REASSIGN OWNED: consider grants on tablespaces, too (Alvaro Herrera, 2012-10-03)
  Apparently this was considered in the original code (see commit cec3b0a9) but I failed to notice that such entries would always be skipped by the database check at the start of the loop.

  Per bugs #7578 by Nikolay, #6116 by tushar.qa@gmail.com.
* Fix access past end of string in date parsing. (Heikki Linnakangas, 2012-10-02)
  This affects date_in(), and a couple of other functions that use DecodeDate().

  Hitoshi Harada
* Fix bugs in "restore.sql" script emitted in pg_dump tar output.Tom Lane2012-09-29
| | | | | | | | | | | | | | | | | | | The tar output module did some very ugly and ultimately incorrect hacking on COPY commands to try to get them to work in the context of restoring a deconstructed tar archive. In particular, it would fail altogether for table names containing any upper-case characters, since it smashed the command string to lower-case before modifying it (and, just to add insult to injury, did that in a way that would fail in multibyte encodings). I don't see any particular value in being flexible about the case of the command keywords, since the string will just have been created by dumpTableData, so let's get rid of the whole case-folding thing. Also, it doesn't seem to meet the POLA for the script to restore data only in COPY mode, so add \i commands to make it have comparable behavior in --inserts mode. Noted while looking at the tar-output code in connection with Brian Weaver's patch.
* Fix tar files emitted by pg_basebackup to be POSIX conformant. (Tom Lane, 2012-09-28)
  Back-patch portions of commit 05b555d12bc2ad0d581f48a12b45174db41dc10d. There doesn't seem to be any reason not to fix pg_basebackup fully, but we can't change pg_dump's "magic" string without breaking older versions of pg_restore. Instead, just patch pg_restore to accept either version of the magic string, in hopes of avoiding compatibility problems when 9.3 comes out. I also fixed pg_dump to write the correct 2-block EOF marker, since that won't create a compatibility problem with pg_restore and it could help with some versions of tar.

  Brian Weaver and Tom Lane
* Stamp 9.1.6. (tag: REL9_1_6) (Tom Lane, 2012-09-19)
* Update time zone data files to tzdata release 2012f. (Tom Lane, 2012-09-19)
  DST law changes in Fiji.
* Translation updates (Peter Eisentraut, 2012-09-19)
* Fix bufmgr so CHECKPOINT_END_OF_RECOVERY behaves as a shutdown checkpoint. (Simon Riggs, 2012-09-16)
  The recovery code documents clearly that a shutdown checkpoint is executed at end of recovery, and a shutdown checkpoint WAL record is written, but the buffer manager had been altered to treat end of recovery as a normal checkpoint. This bug exacerbates the bufmgr relpersistence bug.

  Bug spotted by Andres Freund, patch by me.
* Back-patch fix and test case for bug #7516. (Tom Lane, 2012-09-14)
  Back-patch commits 9afc6481117d2dd936e752da0424a2b6b05f6459 and b8fbbcf37f22c5e8361da939ad0fc4be18a34ca9. The first of these is really a minor code cleanup to save a few cycles, but it turns out to provide a workaround for the misoptimization problem described in bug #7516. The second commit adds a regression test case.

  Back-patch the fix to all active branches. The test case only works as far back as 9.0, because it relies on plpgsql which isn't installed by default before that. (I didn't have success modifying it into an all-plperl form that still provoked a crash, though this may just reflect my lack of Perl-fu.)
* Properly set relpersistence for fake relcache entries. (Robert Haas, 2012-09-14)
  This can result in buffers failing to be properly flushed at checkpoint time, leading to data loss.

  Report, diagnosis, and patch by Jeff Davis.
* Fix logical errors in tsquery selectivity estimation for prefix queries. (Tom Lane, 2012-09-11)
  I made multiple errors in commit 97532f7c29468010b87e40a04f8daa3eb097f654, stemming mostly from failure to think about the available frequency data as being element frequencies not value frequencies (so that occurrences of different elements are not mutually exclusive). This led to sillinesses such as estimating that "word" would match more rows than "word:*".

  The choice to clamp to a minimum estimate of DEFAULT_TS_MATCH_SEL also seems pretty ill-considered in hindsight, as it would frequently result in an estimate much larger than the available data suggests. We do need some sort of clamp, since a pattern not matching any of the MCELEMs probably still needs a selectivity estimate of more than zero. I chose instead to clamp to at least what a non-MCELEM word would be estimated as, preserving the property that "word:*" doesn't get an estimate less than plain "word", whether or not the word appears in MCELEM.

  Per investigation of a gripe from Bill Martin, though I suspect that his example case actually isn't even reaching the erroneous code.

  Back-patch to 9.1 where this code was introduced.
* Make plperl safe against functions that are redefined while running. (Tom Lane, 2012-09-09)
  validate_plperl_function() supposed that it could free an old plperl_proc_desc struct immediately upon detecting that it was stale. However, if a plperl function is called recursively, this could result in deleting the struct out from under an outer invocation, leading to misbehavior or crashes. Add a simple reference-count mechanism to ensure that such structs are freed only when the last reference goes away.

  Per investigation of bug #7516 from Marko Tiikkaja. I am not certain that this error explains his report, because he says he didn't have any recursive calls --- but it's hard to see how else it could have crashed right there. In any case, this definitely fixes some problems in the area.

  Back-patch to all active branches.
* Use .NOTPARALLEL in ecpg/Makefile to avoid a gmake parallelism bug. (Tom Lane, 2012-09-09)
  Investigation shows that some intermittent build failures in ecpg are the result of a gmake bug that was reported quite some time ago: http://savannah.gnu.org/bugs/?30653

  Preventing parallel builds of the ecpg subdirectories seems to dodge the bug. Per yesterday's pgsql-hackers discussion, there are some other things in the subdirectory makefiles that seem rather unsafe for parallel builds too, but there's little point in fixing them as long as we have to work around a make bug.

  Back-patch to 9.1; parallel builds weren't very well supported before that anyway.
* Fix PARAM_EXEC assignment mechanism to be safe in the presence of WITH. (Tom Lane, 2012-09-07)
  The planner previously assumed that parameter Vars having the same absolute query level, varno, and varattno could safely be assigned the same runtime PARAM_EXEC slot, even though they might be different Vars appearing in different subqueries. This was (probably) safe before the introduction of CTEs, but the lazy-evaluation mechanism used for CTEs means that a CTE can be executed during execution of some other subquery, causing the lifespan of Params at the same syntactic nesting level as the CTE to overlap with use of the same slots inside the CTE. In 9.1 we created additional hazards by using the same parameter-assignment technology for nestloop inner scan parameters, but it was broken before that, as illustrated by the added regression test.

  To fix, restructure the planner's management of PlannerParamItems so that items having different semantic lifespans are kept rigorously separated. This will probably result in complex queries using more runtime PARAM_EXEC slots than before, but the slots are cheap enough that this hardly matters. Also, stop generating PlannerParamItems containing Params for subquery outputs: all we really need to do is reserve the PARAM_EXEC slot number, and that now only takes incrementing a counter. The planning code is simpler and probably faster than before, as well as being more correct.

  Per report from Vik Reykja. Back-patch of commit 46c508fbcf98ac334f1e831d21021d731c882fbb into all branches that support WITH.
* Fix "too many arguments" messages not to index off the end of argv[].Robert Haas2012-09-06
| | | | | This affects initdb, clusterdb, reindexdb, and vacuumdb in master and 9.2; in earlier branches, only initdb is affected.
* Fix inappropriate error messages for Hot Standby misconfiguration errors. (Tom Lane, 2012-09-05)
  Give the correct name of the GUC parameter being complained of. Also, emit a more suitable SQLSTATE (INVALID_PARAMETER_VALUE, not the default INTERNAL_ERROR).

  Gurjeet Singh, errcode adjustment by me
* Restore SIGFPE handler after initializing PL/Perl. (Tom Lane, 2012-09-05)
  Perl, for some unaccountable reason, believes it's a good idea to reset SIGFPE handling to SIG_IGN. Which wouldn't be a good idea even if it worked; but on some platforms (Linux at least) it doesn't work at all, instead resulting in forced process termination if the signal occurs. Given the lack of other complaints, it seems safe to assume that Perl never actually provokes SIGFPE and so there is no value in the setting anyway. Hence, reset it to our normal handler after initializing Perl.

  Report, analysis and patch by Andres Freund.
* Make configure probe for mbstowcs_l as well as wcstombs_l. (Tom Lane, 2012-08-31)
  We previously supposed that any given platform would supply both or neither of these functions, so that one configure test would be sufficient. It now appears that at least on AIX this is not the case ... which is likely an AIX bug, but nonetheless we need to cope with it. So use separate tests. Per bug #6758; thanks to Andrew Hastie for doing the followup testing needed to confirm what was happening.

  Backpatch to 9.1, where we began using these functions.
* Back-patch recent fixes for gistchoose and gistRelocateBuildBuffersOnSplit. (Tom Lane, 2012-08-30)
  This back-ports commits c8ba697a4bdb934f0c51424c654e8db6133ea255 and e5db11c5582b469c04a11f217a0f32c827da5dd7, which fix one definite and one speculative bug in gistchoose, and make the code a lot more intelligible as well. In 9.2 only, this also affects the largely-copied-and-pasted logic in gistRelocateBuildBuffersOnSplit.

  The impact of the bugs was that the functions might make poor decisions as to which index tree branch to push a new entry down into, resulting in GiST index bloat and poor performance. The fixes rectify these decisions for future insertions, but a REINDEX would be needed to clean up any existing index bloat.

  Alexander Korotkov, Robert Haas, Tom Lane
* Add missing period to detail message. (Robert Haas, 2012-08-30)
  Per note from Peter Eisentraut.
* Back-patch fixes for some issues in our Windows socket code into 9.1. (Robert Haas, 2012-08-27)
  This is a backport of commit b85427f2276d02756b558c0024949305ea65aca5. Per discussion of bug #4958. Some of these fixes probably need to be back-patched further, but I'm just doing this much for now.
* Fix issues with checks for unsupported transaction states in Hot Standby. (Tom Lane, 2012-08-24)
  The GUC check hooks for transaction_read_only and transaction_isolation tried to check RecoveryInProgress(), so as to disallow setting read/write mode or serializable isolation level (respectively) in hot standby sessions. However, GUC check hooks can be called in many situations where we're not connected to shared memory at all, resulting in a crash in RecoveryInProgress(). Among other cases, this results in EXEC_BACKEND builds crashing during child process start if default_transaction_isolation is serializable, as reported by Heikki Linnakangas. Protect those calls by silently allowing any setting when not inside a transaction; which is okay anyway since these GUCs are always reset at start of transaction.

  Also, add a check to GetSerializableTransactionSnapshot() to complain if we are in hot standby. We need that check despite the one in check_XactIsoLevel() because default_transaction_isolation could be serializable. We don't want to complain any sooner than this in such cases, since that would prevent running transactions at all in such a state; but a transaction can be run, if SET TRANSACTION ISOLATION is done before setting a snapshot. Per report some months ago from Robert Haas.

  Back-patch to 9.1, since these problems were introduced by the SSI patch.

  Kevin Grittner and Tom Lane, with ideas from Heikki Linnakangas
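  A hedged sketch of the workaround mentioned above (the table name is illustrative): in a hot-standby session where default_transaction_isolation is serializable, a transaction can still run if the isolation level is lowered before any snapshot is taken.

    -- assumes a hot-standby session with default_transaction_isolation = 'serializable'
    BEGIN;
    SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;  -- must precede the first snapshot
    SELECT count(*) FROM some_table;
    COMMIT;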
* Fix cascading privilege revoke to notice when privileges are still held. (Tom Lane, 2012-08-23)
  If we revoke a grant option from some role X, but X still holds the option via another grant, we should not recursively revoke the privilege from role(s) Y that X had granted it to.

  This was supposedly fixed as one aspect of commit 4b2dafcc0b1a579ef5daaa2728223006d1ff98e9, but I must not have tested it, because in fact that code never worked: it forgot to shift the grant-option bits back over when masking the bits being revoked.

  Per bug #6728 from Daniel German. Back-patch to all active branches, since this has been wrong since 8.0.
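  A minimal sketch of the scenario, with hypothetical roles and table: carol holds the grant option from two grantors, so revoking one of them should not cascade to carol's own grantee.

    -- alice and bob each hold SELECT WITH GRANT OPTION on table t
    SET ROLE alice;
    GRANT SELECT ON t TO carol WITH GRANT OPTION;
    RESET ROLE;
    SET ROLE bob;
    GRANT SELECT ON t TO carol WITH GRANT OPTION;
    RESET ROLE;
    SET ROLE carol;
    GRANT SELECT ON t TO dave;
    RESET ROLE;
    -- Revoking only alice's grant option must not strip dave's privilege,
    -- because carol still holds the option via bob's grant.
    SET ROLE alice;
    REVOKE GRANT OPTION FOR SELECT ON t FROM carol CASCADE;
    RESET ROLE;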
* Fix rescan logic in nodeCtescan. (Tom Lane, 2012-08-15)
  The previous coding essentially assumed that nodes would be rescanned in the same order they were initialized in; or at least that the "leader" of a group of CTEscans would be rescanned before any others were required to execute. Unfortunately, that isn't even a little bit true. It's possible to devise queries in which the leader isn't rescanned until other CTEscans on the same CTE have run to completion, or even in which the leader never gets a rescan call at all.

  The fix makes the leader specially responsible only for initial creation and final destruction of the tuplestore; rescan resets are now a symmetrically shared responsibility. This means that we might reset the tuplestore multiple times when restarting a plan subtree containing multiple CTEscans; but resetting an already-empty tuplestore is cheap enough that that doesn't seem like a problem.

  Per report from Adam Mackler; the new regression test cases are based on his example query. Back-patch to 8.4 where CTE scans were introduced.
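  A rough sketch of the query shape involved (not the reporter's actual query): the same CTE is scanned at the top level and again inside a correlated subquery, so one CTE scan is rescanned repeatedly and may run before, or without, the "leader" scan.

    WITH q(x) AS (SELECT generate_series(1, 3))
    SELECT o.x,
           (SELECT count(*) FROM q AS i WHERE i.x <= o.x) AS running_count
    FROM q AS o;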
* Disallow extensions from owning the schema they are assigned to. (Tom Lane, 2012-08-15)
  This situation creates a dependency loop that confuses pg_dump and probably other things. Moreover, since the mental model is that the extension "contains" schemas it owns, but "is contained in" its extschema (even though neither is strictly true), having both true at once is confusing for people too. So prevent the situation from being set up.

  Reported and patched by Thom Brown. Back-patch to 9.1 where extensions were added.
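  A sketch of the now-rejected arrangement, using hstore purely as an example of a relocatable extension:

    CREATE SCHEMA exts;
    CREATE EXTENSION hstore SCHEMA exts;
    -- Making the extension's own schema a member of the extension would
    -- create the dependency loop; this is now refused.
    ALTER EXTENSION hstore ADD SCHEMA exts;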
* Stamp 9.1.5. (tag: REL9_1_5) (Tom Lane, 2012-08-14)
* Prevent access to external files/URLs via XML entity references. (Tom Lane, 2012-08-14)
  xml_parse() would attempt to fetch external files or URLs as needed to resolve DTD and entity references in an XML value, thus allowing unprivileged database users to attempt to fetch data with the privileges of the database server. While the external data wouldn't get returned directly to the user, portions of it could be exposed in error messages if the data didn't parse as valid XML; and in any case the mere ability to check existence of a file might be useful to an attacker.

  The ideal solution to this would still allow fetching of references that are listed in the host system's XML catalogs, so that documents can be validated according to installed DTDs. However, doing that with the available libxml2 APIs appears complex and error-prone, so we're not going to risk it in a security patch that necessarily hasn't gotten wide review. So this patch merely shuts off all access, causing any external fetch to silently expand to an empty string. A future patch may improve this.

  In HEAD and 9.2, also suppress warnings about undefined entities, which would otherwise occur as a result of not loading referenced DTDs. Previous branches don't show such warnings anyway, due to different error handling arrangements.

  Credit to Noah Misch for first reporting the problem, and for much work towards a solution, though this simplistic approach was not his preference. Also thanks to Daniel Veillard for consultation.

  Security: CVE-2012-3489
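  A rough illustration of the kind of input affected (the file path is only an example); after this change the external entity silently expands to an empty string instead of being fetched with the server's privileges:

    SELECT xmlparse(DOCUMENT
      '<?xml version="1.0"?>
       <!DOCTYPE doc [ <!ENTITY ext SYSTEM "file:///etc/passwd"> ]>
       <doc>&ext;</doc>');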
* Translation updates (Peter Eisentraut, 2012-08-14)
* Update time zone data files to tzdata release 2012e. (Tom Lane, 2012-08-14)
  DST law changes in Morocco; Tokelau has relocated to the other side of the International Date Line; and apparently Olson had Tokelau's GMT offset wrong by an hour even before that.

  There are also a large number of non-significant changes in this update. Upstream took the opportunity to remove trailing whitespace, and the SCCS-style version numbers on the individual files are gone too.
* Fix dependencies generated during ALTER TABLE ADD CONSTRAINT USING INDEX. (Tom Lane, 2012-08-11)
  This command generated new pg_depend entries linking the index to the constraint and the constraint to the table, which match the entries made when a unique or primary key constraint is built de novo. However, it did not bother to get rid of the entries linking the index directly to the table. We had considered the issue when the ADD CONSTRAINT USING INDEX patch was written, and concluded that we didn't need to get rid of the extra entries. But this is wrong: ALTER COLUMN TYPE wasn't expecting such redundant dependencies to exist, as reported by Hubert Depesz Lubaczewski. On reflection it seems rather likely to break other things as well, since there are many bits of code that crawl pg_depend for one purpose or another, and most of them are pretty naive about what relationships they're expecting to find.

  Fortunately it's not that hard to get rid of the extra dependency entries, so let's do that.

  Back-patch to 9.1, where ALTER TABLE ADD CONSTRAINT USING INDEX was added.
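  A sketch of the command sequence involved (table and index names are made up); before the fix, the leftover index-to-table dependency could trip up the later ALTER COLUMN TYPE:

    CREATE TABLE t (id integer);
    CREATE UNIQUE INDEX t_id_idx ON t (id);
    ALTER TABLE t ADD CONSTRAINT t_id_key UNIQUE USING INDEX t_id_idx;
    -- this command crawls pg_depend and was confused by the redundant entries
    ALTER TABLE t ALTER COLUMN id TYPE bigint;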
* Fix upper limit of superuser_reserved_connections, add limit for wal_senders (Magnus Hagander, 2012-08-10)
  superuser_reserved_connections should be limited to the maximum number of connections excluding autovacuum workers, not including them. Add a similar check for max_wal_senders, which should never be higher than max_connections.
* Update isolation tests' README file. (Tom Lane, 2012-08-08)
  The directions explaining about running the prepared-transactions test were not updated in commit ae55d9fbe3871a5e6309d9b91629f1b0ff2b8cba.
* fsync backup_label after pg_start_backup() (Simon Riggs, 2012-08-07)
  Dave Kerr, backpatched by Simon Riggs
* Perform conversion from Python unicode to string/bytes object via UTF-8. (Heikki Linnakangas, 2012-08-06)
  We used to convert the unicode object directly to a string in the server encoding by calling Python's PyUnicode_AsEncodedString function. In other words, we used Python's routines to do the encoding. However, that has a few problems. First of all, it required keeping a mapping table of Python encoding names and PostgreSQL encodings. But the real killer was that Python doesn't support EUC_TW and MULE_INTERNAL encodings at all.

  Instead, convert the Python unicode object to UTF-8, and use PostgreSQL's encoding conversion functions to convert from UTF-8 to server encoding. We were already doing the same in the other direction in PLyUnicode_FromString, so this is more consistent, too.

  Note: This makes SQL_ASCII behave more leniently. We used to map SQL_ASCII to Python's 'ascii', which on Python means strict 7-bit ASCII only, so you got an error if the python string contained anything but pure ASCII. You no longer get an error; you get the UTF-8 representation of the string instead.

  Backpatch to 9.0, where these conversions were introduced.

  Jan Urbański
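  A minimal sketch, assuming plpythonu is installed: a function returning a Python unicode object, which now reaches the server encoding via the UTF-8 intermediate described above.

    CREATE FUNCTION unicode_echo() RETURNS text AS $$
    # encoded to UTF-8, then converted to the server encoding by PostgreSQL
    return u'fa\u00e7ade'
    $$ LANGUAGE plpythonu;

    SELECT unicode_echo();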
* Fix bugs with parsing signed hh:mm and hh:mm:ss fields in interval input. (Tom Lane, 2012-08-03)
  DecodeInterval() failed to honor the "range" parameter (the special SQL syntax for indicating which fields appear in the literal string) if the time was signed. This seems inappropriate, so make it work like the not-signed case. The inconsistency was introduced in my commit f867339c0148381eb1d01f93ab5c79f9d10211de, which as noted in its log message was only really focused on making SQL-compliant literals work per spec. Including a sign here is not per spec, but if we're going to allow it then it's reasonable to expect it to work like the not-signed case.

  Also, remove bogus setting of tmask, which caused subsequent processing to think that what had been given was a timezone and not an hh:mm(:ss) field, thus confusing checks for redundant fields. This seems to be an aboriginal mistake in Lockhart's commit 2cf1642461536d0d8f3a1cf124ead0eac04eb760.

  Add regression test cases to illustrate the changed behaviors.

  Back-patch as far as 8.4, where support for spec-compliant interval literals was added.

  Range problem reported and diagnosed by Amit Kapila, tmask problem by me.
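  A sketch of the affected input form (values are illustrative): the field qualification should now be honored for signed times just as for unsigned ones.

    -- unsigned: read as 2 minutes 3 seconds under the MINUTE TO SECOND qualifier
    SELECT INTERVAL '2:03' MINUTE TO SECOND;
    -- signed: previously the qualifier could be ignored, giving an
    -- hours:minutes reading instead
    SELECT INTERVAL '-2:03' MINUTE TO SECOND;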
* Fix WITH attached to a nested set operation (UNION/INTERSECT/EXCEPT). (Tom Lane, 2012-07-31)
  Parse analysis neglected to cover the case of a WITH clause attached to an intermediate-level set operation; it only handled WITH at the top level or WITH attached to a leaf-level SELECT. Per report from Adam Mackler.

  In HEAD, I rearranged the order of SelectStmt's fields to put withClause with the other fields that can appear on non-leaf SelectStmts. In back branches, leave it alone to avoid a possible ABI break for third-party code.

  Back-patch to 8.4 where WITH support was added.
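  A sketch of the query shape that was mishandled (values are arbitrary): a WITH clause attached to a parenthesized set operation that is itself one arm of a larger set operation.

    SELECT 1 AS x
    UNION
    ( WITH q(x) AS (SELECT 2)
      SELECT x FROM q
      UNION
      SELECT 3 );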
* Fix syslogger so that log_truncate_on_rotation works in the first rotation. (Tom Lane, 2012-07-31)
  In the original coding of the log rotation stuff, we did not bother to make the truncation logic work for the very first rotation after postmaster start (or after a syslogger crash and restart). It just always appended in that case. It did not seem terribly important at the time, but we've recently had two separate complaints from people who expected it to work unsurprisingly. (Both users tend to restart the postmaster about as often as a log rotation is configured to happen, which is maybe not typical use, but still...)

  Since the initial log file is opened in the postmaster, fixing this requires passing down some more state to the syslogger child process.

  It's always been like this, so back-patch to all supported branches.
* Improve reporting of error situations in find_other_exec(). (Tom Lane, 2012-07-27)
  This function suppressed any stderr output from the called program, which is unnecessary in the normal case and unhelpful in error cases. It also gave a rather opaque message along the lines of "fgets failure: Success" in case the called program failed to return anything on stdout. Since we've seen multiple reports of people not understanding what's wrong when pg_ctl reports this, improve the message.

  Back-patch to all active branches.
* Only allow autovacuum to be auto-canceled by a directly blocked process. (Tom Lane, 2012-07-26)
  In the original coding of the autovacuum cancel feature, commit acac68b2bcae818bc8803b8cb8cbb17eee8d5e2b, an autovacuum process was considered a target for cancellation if it was found to hard-block any process examined in the deadlock search. This patch tightens the test so that the autovacuum must directly hard-block the current process. This should make the behavior more predictable in general, and in particular it ensures that an autovacuum will not be canceled with less than deadlock_timeout grace period. In the old coding, it was possible for an autovacuum to be canceled almost instantly, given unfortunate timing of two or more other processes' lock attempts.

  This also justifies the logging methodology in the recent commit d7318d43d891bd63e82dcfc27948113ed7b1db80; without this restriction, that patch isn't providing enough information to see the connection of the canceling process to the autovacuum. Like that one, patch all the way back.
* Log a better message when canceling autovacuum. (Robert Haas, 2012-07-26)
  The old message was at DEBUG2, so typically it didn't show up in the log at all. As a result, in most cases where autovacuum was canceled, the only information that was logged was the table being vacuumed, with no indication as to what problem caused the cancel. Crank up the level to LOG and add some more details to assist with debugging.

  Back-patch all the way, per discussion on pgsql-hackers.
* Fix longstanding crash-safety bug with newly-created-or-reset sequences. (Tom Lane, 2012-07-25)
  If a crash occurred immediately after the first nextval() call for a serial column, WAL replay would restore the sequence to a state in which it appeared that no nextval() had been done, thus allowing the first sequence value to be returned again by the next nextval() call; as reported in bug #6748 from Xiangming Mei.

  More generally, the problem would occur if an ALTER SEQUENCE was executed on a freshly created or reset sequence. (The manifestation with serial columns was introduced in 8.2 when we added an ALTER SEQUENCE OWNED BY step to serial column creation.) The cause is that sequence creation attempted to save one WAL entry by writing out a WAL record that made it appear that the first nextval() had already happened (viz, with is_called = true), while marking the sequence's in-database state with log_cnt = 1 to show that the first nextval() need not emit a WAL record. However, ALTER SEQUENCE would emit a new WAL entry reflecting the actual in-database state (with is_called = false). Then, nextval would allocate the first sequence value and set is_called = true, but it would trust the log_cnt value and not emit any WAL record. A crash at this point would thus restore the sequence to its post-ALTER state, causing the next nextval() call to return the first sequence value again.

  To fix, get rid of the idea of logging an is_called status different from reality. This means that the first nextval-driven WAL record will happen at the first nextval call not the second, but the marginal cost of that is pretty negligible. In addition, make sure that ALTER SEQUENCE resets log_cnt to zero in any case where it touches sequence parameters that affect future nextval results. This will result in some user-visible changes in the contents of a sequence's log_cnt column, as reflected in the patch's regression test changes; but no application should be depending on that anyway, since it was already true that log_cnt changes rather unpredictably depending on checkpoint timing.

  In addition, make some basically-cosmetic improvements to get rid of sequence.c's undesirable intimacy with page layout details. It was always really trying to WAL-log the contents of the sequence tuple, so we should have it do that directly using a HeapTuple's t_data and t_len, rather than backing into it with some magic assumptions about where the tuple would be on the sequence's page.

  Back-patch to all supported branches.
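  A sketch of the window described above, using a serial column (whose creation performs the CREATE SEQUENCE plus ALTER SEQUENCE ... OWNED BY combination that was affected):

    CREATE TABLE t (id serial);
    INSERT INTO t DEFAULT VALUES;  -- first nextval() on the column's sequence
    -- A crash at this point previously let WAL replay restore the sequence
    -- so that the next INSERT could be handed the same id value again.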
* Remove now unneeded results file for disabled prepared transactions case. (Andrew Dunstan, 2012-07-20)
* Remove prepared transactions from main isolation test schedule. (Andrew Dunstan, 2012-07-20)
  There is no point in running this test when prepared transactions are disabled, which is the default. New make targets that include the test are provided. This will save some useless waste of cycles on buildfarm machines.

  Backpatch to 9.1 where these tests were introduced.
* Fix whole-row Var evaluation to cope with resjunk columns (again). (Tom Lane, 2012-07-20)
  When a whole-row Var is reading the result of a subquery, we need it to ignore any "resjunk" columns that the subquery might have evaluated for GROUP BY or ORDER BY purposes. We've hacked this area before, in commit 68e40998d058c1f6662800a648ff1e1ce5d99cba, but that fix only covered whole-row Vars of named composite types, not those of RECORD type; and it was mighty klugy anyway, since it just assumed without checking that any extra columns in the result must be resjunk. A proper fix requires getting hold of the subquery's targetlist so we can actually see which columns are resjunk (whereupon we can use a JunkFilter to get rid of them). So bite the bullet and add some infrastructure to make that possible.

  Per report from Andrew Dunstan and additional testing by Merlin Moncure. Back-patch to all supported branches. In 8.3, also back-patch commit 292176a118da6979e5d368a4baf27f26896c99a5, which for some reason I had not done at the time, but it's a prerequisite for this change.
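  A sketch of the kind of query affected (hypothetical table t with columns a and b): the subquery's ORDER BY adds a resjunk column to its output, which the whole-row reference to the subquery must ignore.

    -- sub's row value should contain only the named output column a,
    -- not the extra column evaluated for ORDER BY b
    SELECT sub FROM (SELECT a FROM t ORDER BY b) AS sub;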
* Improve coding around the fsync request queue. (Tom Lane, 2012-07-17)
  In all branches back to 8.3, this patch fixes a questionable assumption in CompactCheckpointerRequestQueue/CompactBgwriterRequestQueue that there are no uninitialized pad bytes in the request queue structs. This would only cause trouble if (a) there were such pad bytes, which could happen in 8.4 and up if the compiler makes enum ForkNumber narrower than 32 bits, but otherwise would require not-currently-planned changes in the widths of other typedefs; and (b) the kernel has not uniformly initialized the contents of shared memory to zeroes. Still, it seems a tad risky, and we can easily remove any risk by pre-zeroing the request array for ourselves. In addition to that, we need to establish a coding rule that struct RelFileNode can't contain any padding bytes, since such structs are copied into the request array verbatim. (There are other places that are assuming this anyway, it turns out.)

  In 9.1 and up, the risk was a bit larger because we were also effectively assuming that struct RelFileNodeBackend contained no pad bytes, and with fields of different types in there, that would be much easier to break. However, there is no good reason to ever transmit fsync or delete requests for temp files to the bgwriter/checkpointer, so we can revert the request structs to plain RelFileNode, getting rid of the padding risk and saving some marginal number of bytes and cycles in fsync queue manipulation while we are at it. The savings might be more than marginal during deletion of a temp relation, because the old code transmitted an entirely useless but nonetheless expensive-to-process ForgetRelationFsync request to the background process, and also had the background process perform the file deletion even though that can safely be done immediately.

  In addition, make some cleanup of nearby comments and small improvements to the code in CompactCheckpointerRequestQueue/CompactBgwriterRequestQueue.
* Remove recently added PL/Perl encoding tests (Alvaro Herrera, 2012-07-17)
  These only pass cleanly on UTF8 and SQL_ASCII encodings, besides the Japanese encoding in which they were originally written, which is clearly not good enough. Since the functionality they test has not ever been tested from PL/Perl, the best answer seems to be to remove the new tests completely.

  Per buildfarm results and ensuing discussion.