path: root/src
...
* Prevent pushing down WHERE clauses into unsafe UNION/INTERSECT nests. (Tom Lane, 2013-06-05)
The planner is aware that it mustn't push down upper-level quals into subqueries if the quals reference subquery output columns that contain set-returning functions or volatile functions, or are non-DISTINCT outputs of a DISTINCT ON subquery. However, it missed making this check when there were one or more levels of UNION or INTERSECT above the dangerous expression. This could lead to "set-valued function called in context that cannot accept a set" errors, as seen in bug #8213 from Eric Soroos, or to silently wrong answers in the other cases.

To fix, refactor the checks so that we make the column-is-unsafe checks during subquery_is_pushdown_safe(), which already has to recursively inspect all arms of a set-operation tree. This makes qual_is_pushdown_safe() considerably simpler, at the cost that we will spend some cycles checking output columns that possibly aren't referenced in any upper qual. But the cases where this code gets executed at all are already nontrivial queries, so it's unlikely anybody will notice any slowdown of planning.

This has been broken since commit 05f916e6add9726bf4ee046e4060c1b03c9961f2, which makes the bug over ten years old. A bit surprising nobody noticed it before now.
* Put analyze_keyword back in explain_option_name production. (Tom Lane, 2013-06-05)
In commit 2c92edad48796119c83d7dbe6c33425d1924626d, I broke "EXPLAIN (ANALYZE)" syntax, because I mistakenly thought that ANALYZE/ANALYSE were only partially reserved and thus would be included in NonReservedWord; but actually they're fully reserved so they still need to be called out here.

A nicer solution would be to demote these words to type_func_name_keyword status (they can't be less than that because of "VACUUM [ANALYZE] ColId"). While that works fine so far as the core grammar is concerned, it breaks ECPG's grammar for reasons I don't have time to isolate at the moment. So do this for the time being.

Per report from Kevin Grittner. Back-patch to 9.0, like the previous commit.
* Provide better message when CREATE EXTENSION can't find a target schema. (Tom Lane, 2013-06-04)
The new message (and SQLSTATE) matches the corresponding error cases in namespace.c. This was thought to be a "can't happen" case when extension.c was written, so we didn't think hard about how to report it. But it definitely can happen in 9.2 and later, since we no longer require search_path to contain any valid schema names. It's probably also possible in 9.1 if search_path came from a noninteractive source. So, back-patch to all releases containing this code.

Per report from Sean Chittenden, though this isn't exactly his patch.
* Fix memory leak in LogStandbySnapshot(). (Tom Lane, 2013-06-04)
The array allocated by GetRunningTransactionLocks() needs to be pfree'd when we're done with it. Otherwise we leak some memory during each checkpoint, if wal_level = hot_standby. This manifests as memory bloat in the checkpointer process, or in bgwriter in versions before we made the checkpointer separate.

Reported and fixed by Naoya Anzai. Back-patch to 9.0 where the issue was introduced.

In passing, improve comments for GetRunningTransactionLocks(), and add an Assert that we didn't overrun the palloc'd array.
* Add semicolons to eval'd strings to hide a minor Perl behavioral change. (Tom Lane, 2013-06-03)
"eval q{foo}" used to complain that the error was on line 2 of the eval'd string, because eval internally tacked on "\n;" so that the end of the erroneous command was indeed on line 2. But as of Perl 5.18 it more sanely says that the error is on line 1. To avoid Perl-version-dependent regression test results, use "eval q{foo;}" instead in the two places where this matters. Per buildfarm.

Since people might try to use newer Perl versions with older PG releases, back-patch as far as 9.0 where these test cases were added.
* Allow type_func_name_keywords in some places where they weren't before. (Tom Lane, 2013-06-02)
This change makes type_func_name_keywords less reserved than they were before, by allowing them for role names, language names, EXPLAIN and COPY options, and SET values for GUCs; which are all places where few if any actual keywords could appear instead, so no new ambiguities are introduced.

The main driver for this change is to allow "COPY ... (FORMAT BINARY)" to work without quoting the word "binary". That is an inconsistency that has been complained of repeatedly over the years (at least by Pavel Golub, Kurt Lidl, and Simon Riggs); but we hadn't thought of any non-ugly solution until now. Back-patch to 9.0 where the COPY (FORMAT BINARY) syntax was introduced.
* Fix typo in comment. (Robert Haas, 2013-05-23)
Pavan Deolasee
* Fix fd.c to preserve errno where needed. (Tom Lane, 2013-05-16)
PathNameOpenFile failed to ensure that the correct value of errno was returned to its caller after a failure (because it incorrectly supposed that free() can never change errno). In some cases this would result in a user-visible failure because an expected ENOENT errno was replaced with something else. Bogus EINVAL failures have been observed on OS X, for example.

There were also a couple of places that could mangle an important value of errno if FDDEBUG was defined. While the usefulness of that debug support is highly debatable, we might as well make it safe to use, so add errno save/restore logic to the DO_DB macro.

Per bug #8167 from Nelson Minar, diagnosed by RhodiumToad. Back-patch to all supported branches.
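
The underlying pattern is that free() (and other cleanup calls) may legally clobber errno, so an error path must save it before cleaning up. A minimal C sketch of the idea, using a hypothetical helper rather than the actual fd.c code:

    #include <errno.h>
    #include <fcntl.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    /* Hypothetical helper: open a copy of the path, preserving errno on failure. */
    int open_copied_path(const char *path)
    {
        char *copy = strdup(path);
        int fd;

        if (copy == NULL)
            return -1;
        fd = open(copy, O_RDONLY);
        if (fd < 0)
        {
            int save_errno = errno;    /* free() may legally change errno ... */

            free(copy);
            errno = save_errno;        /* ... so restore it for the caller */
            return -1;
        }
        free(copy);
        return fd;
    }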
* Fix handling of OID wraparound while in standalone mode. (Tom Lane, 2013-05-13)
If OID wraparound should occur while in standalone mode (unlikely but possible), we want to advance the counter to FirstNormalObjectId not FirstBootstrapObjectId. Otherwise, user objects might be created with OIDs in the system-reserved range. That isn't immediately harmful but it poses a risk of conflicts during future pg_upgrade operations.

Noted by Andres Freund. Back-patch to all supported branches, since all of them are supported sources for pg_upgrade operations.
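
A minimal sketch of the wraparound rule being described, assuming that values below a FirstNormalObjectId-style boundary are reserved for system use; the constant here is illustrative, not the actual catalog definition:

    #include <stdint.h>

    #define FIRST_NORMAL_ID 16384u   /* assumed reserved-range boundary, for illustration */

    /* Advance a 32-bit counter, skipping the reserved low range after wraparound. */
    uint32_t advance_id(uint32_t current)
    {
        uint32_t next = current + 1;

        if (next < FIRST_NORMAL_ID)   /* wrapped around into the reserved range */
            next = FIRST_NORMAL_ID;
        return next;
    }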
* Guard against input_rows == 0 in estimate_num_groups(). (Tom Lane, 2013-05-10)
This case doesn't normally happen, because the planner usually clamps all row estimates to at least one row; but I found that it can arise when dealing with relations excluded by constraints. Without a defense, estimate_num_groups() can return zero, which leads to divisions by zero inside the planner as well as assertion failures in the executor.

An alternative fix would be to change set_dummy_rel_pathlist() to make the size estimate for a dummy relation 1 row instead of 0, but that seemed pretty ugly; and probably someday we'll want to drop the convention that the minimum rowcount estimate is 1 row.

Back-patch to 8.4, as the problem can be demonstrated that far back.
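
The defensive clamp amounts to something like the following sketch (a hypothetical function, not the actual estimate_num_groups() code), which keeps a zero row estimate from propagating into later divisions:

    /* Sketch: a grouping estimate that must never return zero. */
    double estimate_groups(double input_rows, double rows_per_group)
    {
        if (input_rows < 1.0)          /* e.g. a relation excluded by constraints */
            input_rows = 1.0;
        if (rows_per_group < 1.0)
            rows_per_group = 1.0;
        return input_rows / rows_per_group;   /* nonzero, and never divides by zero */
    }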
* Fix thinko in comment. (Heikki Linnakangas, 2013-05-02)
WAL segment means a 16 MB physical WAL file; this comment meant a logical 4 GB log file. Amit Langote. Apply to backbranches only, as the comment is gone in master.
* Install recycled WAL segments with current timeline ID during recovery. (Heikki Linnakangas, 2013-04-30)
This is a follow-up to the earlier fix, which changed the recycling logic to recycle WAL segments under the current recovery target timeline. That turns out to be a bad idea, because installing a recycled segment with a TLI higher than what we're recovering at the moment means that the recovery logic will find the recycled WAL segment and try to replay it. It will fail, but the mere presence of such a WAL segment will mask any other, real, file with the same log/seg but smaller TLI.

Per report from Mitsumasa Kondo. Apply to 9.1 and 9.2, like the previous fix. Master was already doing this differently; this patch makes 9.1 and 9.2 do the same thing as master.
* Ensure ANALYZE phase is not skipped because of canceled truncate. (Kevin Grittner, 2013-04-29)
Patch b19e4250b45e91c9cbdd18d35ea6391ab5961c8d attempted to preserve existing behavior regarding statistics generation in the case that a truncation attempt was canceled due to lock conflicts. It failed to do this accurately in two regards: (1) autovacuum had previously generated statistics if the truncate attempt failed to initially get the lock rather than having started the attempt, and (2) the VACUUM ANALYZE command had always generated statistics. Both of these changes were unintended, and are reverted by this patch.

On review, there seems to be consensus that the previous failure to generate statistics when the truncate was terminated was more an unfortunate consequence of how that effort was previously terminated than a feature we want to keep; so this patch generates statistics even when an autovacuum truncation attempt terminates early. Another unintended change which is kept, on the basis that it is an improvement, is that when a VACUUM command is truncating, it will use the new heuristic for avoiding blocking other processes, rather than keeping an AccessExclusiveLock on the table for however long the truncation takes.

Per multiple reports, with some renaming per patch by Jeff Janes. Backpatch to 9.0, where the problem was created.
* Ensure that user-created rows in extension tables get dumped if the table is explicitly requested. (Joe Conway, 2013-04-26)
This applies when the table is explicitly requested, either with a -t/--table switch of the table itself, or by a -n/--schema switch of the schema containing the extension table.

Patch reviewed by Vibhor Kumar and Dimitri Fontaine. Backpatched to 9.1, when the extension management facility was added.
* Avoid deadlock between concurrent CREATE INDEX CONCURRENTLY commands. (Tom Lane, 2013-04-25)
There was a high probability of two or more concurrent C.I.C. commands deadlocking just before completion, because each would wait for the others to release their reference snapshots. Fix by releasing the snapshot before waiting for other snapshots to go away.

Per report from Paul Hinze. Back-patch to all active branches.
* Fix typo in comment. (Heikki Linnakangas, 2013-04-25)
Peter Geoghegan
* Fix longstanding race condition in plancache.c. (Tom Lane, 2013-04-20)
When creating or manipulating a cached plan for a transaction control command (particularly ROLLBACK), we must not perform any catalog accesses, since we might be in an aborted transaction. However, plancache.c busily saved or examined the search_path for every cached plan. If we were unlucky enough to do this at a moment where the path's expansion into schema OIDs wasn't already cached, we'd do some catalog accesses; and with some more bad luck such as an ill-timed signal arrival, that could lead to crashes or Assert failures, as exhibited in bug #8095 from Nachiket Vaidya. Fortunately, there's no real need to consider the search path for such commands, so we can just skip the relevant steps when the subject statement is a TransactionStmt.

This is somewhat related to bug #5269, though the failure happens during initial cached-plan creation rather than revalidation.

This bug has been there since the plan cache was invented, so back-patch to all supported branches.
* Fix crash on compiling a regular expression with more than 32k colors. (Heikki Linnakangas, 2013-04-04)
Throw an error instead. Backpatch to all supported branches.
* Calculate # of semaphores correctly with --disable-spinlocks. (Heikki Linnakangas, 2013-04-04)
The old formula didn't take into account that each WAL sender process needs a spinlock. We had also already exceeded the fixed number of spinlocks reserved for misc purposes (10). Bump that to 30.

Backpatch to 9.0, where WAL senders were introduced. If I counted correctly, 9.0 had exactly 10 predefined spinlocks, and 9.1 exceeded that, but bump the limit in 9.0 too because 10 is uncomfortably close to the edge.
* Minor robustness improvements for isolationtester. (Tom Lane, 2013-04-02)
Notice and complain about PQcancel() failures. Also, don't dump core if an error PGresult doesn't contain severity and message subfields, as it might not if it was generated by libpq itself. (We have a longstanding TODO item to improve that, but in the meantime isolationtester had better cope.)

I tripped across the latter item while investigating a trouble report on buildfarm member spoonbill. As for the former, there's no evidence that PQcancel failure is actually involved in spoonbill's problem, but it still seems like a bad idea to ignore an error return code.
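
For reference, checking the PQcancel() return code looks roughly like the sketch below, a standalone illustration of the libpq calls rather than the isolationtester code itself:

    #include <stdio.h>
    #include <libpq-fe.h>

    /* Send a cancel request and report failures instead of silently ignoring them. */
    static void cancel_running_query(PGconn *conn)
    {
        char errbuf[256];
        PGcancel *cancel = PQgetCancel(conn);

        if (cancel == NULL)
            return;                         /* connection is in a bad state */
        if (!PQcancel(cancel, errbuf, sizeof(errbuf)))
            fprintf(stderr, "PQcancel failed: %s\n", errbuf);
        PQfreeCancel(cancel);
    }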
* Stamp 9.1.9. [REL9_1_9] (Tom Lane, 2013-04-01)
* Fix insecure parsing of server command-line switches. (Tom Lane, 2013-04-01)
An oversight in commit e710b65c1c56ca7b91f662c63d37ff2e72862a94 allowed database names beginning with "-" to be treated as though they were secure command-line switches; and this switch processing occurs before client authentication, so that even an unprivileged remote attacker could exploit the bug, needing only connectivity to the postmaster's port.

Assorted exploits for this are possible, some requiring a valid database login, some not. The worst known problem is that the "-r" switch can be invoked to redirect the process's stderr output, so that subsequent error messages will be appended to any file the server can write. This can for example be used to corrupt the server's configuration files, so that it will fail when next restarted. Complete destruction of database tables is also possible.

Fix by keeping the database name extracted from a startup packet fully separate from command-line switches, as had already been done with the user name field.

The Postgres project thanks Mitsumasa Kondo for discovering this bug, Kyotaro Horiguchi for drafting the fix, and Noah Misch for recognizing the full extent of the danger.

Security: CVE-2013-1899
* Make REPLICATION privilege checks test current user not authenticated user. (Tom Lane, 2013-04-01)
The pg_start_backup() and pg_stop_backup() functions checked the privileges of the initially-authenticated user rather than the current user, which is wrong. For example, a user-defined index function could successfully call these functions when executed by ANALYZE within autovacuum. This could allow an attacker with valid but low-privilege database access to interfere with creation of routine backups.

Reported and fixed by Noah Misch.

Security: CVE-2013-1901
* Translation updates (Peter Eisentraut, 2013-03-31)
* Ignore extra subquery outputs in set_subquery_size_estimates(). (Tom Lane, 2013-03-31)
In commit 0f61d4dd1b4f95832dcd81c9688dac56fd6b5687, I added code to copy up column width estimates for each column of a subquery. That code supposed that the subquery couldn't have any output columns that didn't correspond to known columns of the current query level --- which is true when a query is parsed from scratch, but the assumption fails when planning a view that depends on another view that's been redefined (adding output columns) since the upper view was made. This results in an assertion failure or even a crash, as per bug #8025 from lindebg.

Remove the Assert and instead skip the column if its resno is out of the expected range.
* Translation updates (Alvaro Herrera, 2013-03-31)
* Update time zone data files to tzdata release 2013b. (Tom Lane, 2013-03-28)
DST law changes in Chile, Haiti, Morocco, Paraguay, some Russian areas. Historical corrections for numerous places.
* Reset OpenSSL randomness state in each postmaster child process. (Tom Lane, 2013-03-27)
Previously, if the postmaster initialized OpenSSL's PRNG (which it will do when ssl=on in postgresql.conf), the same pseudo-random state would be inherited by each forked child process. The problem is masked to a considerable extent if the incoming connection uses SSL encryption, but when it does not, identical pseudo-random state is made available to functions like contrib/pgcrypto. The process's PID does get mixed into any requested random output, but on most systems that still only results in 32K or so distinct random sequences available across all Postgres sessions. This might allow an attacker who has database access to guess the results of "secure" operations happening in another session.

To fix, forcibly reset the PRNG after fork(). Each child process that has need for random numbers from OpenSSL's generator will thereby be forced to go through OpenSSL's normal initialization sequence, which should provide much greater variability of the sequences. There are other ways we might do this that would be slightly cheaper, but this approach seems the most future-proof against SSL-related code changes.

This has been assigned CVE-2013-1900, but since the issue and the patch have already been publicized on pgsql-hackers, there's no point in trying to hide this commit.

Back-patch to all supported branches.

Marko Kreen
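
The shape of the fix is roughly the following sketch; it assumes the pre-1.1.0 OpenSSL API, where RAND_cleanup() discards the inherited PRNG state so the next request reseeds. This is an illustration, not the actual postmaster code:

    #include <unistd.h>
    #include <openssl/rand.h>

    /* Fork a child process and make sure it does not reuse the parent's PRNG state. */
    static pid_t fork_backend(void)
    {
        pid_t pid = fork();

        if (pid == 0)
            RAND_cleanup();   /* child: drop inherited state; OpenSSL reseeds on next use */
        return pid;
    }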
* Fix buffer pin leak in heap update redo routine. (Heikki Linnakangas, 2013-03-27)
In a heap update, if the old and new tuple were on different pages, and the new page no longer existed (because it was subsequently truncated away by vacuum), heap_xlog_update forgot to release the pin on the old buffer. This bug was introduced by the "Fix multiple problems in WAL replay" patch, commit 3bbf668de9f1bc172371681e80a4e769b6d014c8 (on master branch). With full_page_writes=off, this triggered an "incorrect local pin count" error later in replay, if the old page was vacuumed.

This fixes bug #7969, reported by Yunong Xiao. Backpatch to 9.0, like the commit that introduced this bug.
* Ignore invalid indexes in pg_dump. (Tom Lane, 2013-03-26)
Dumping invalid indexes can cause problems at restore time, for example if the reason the index creation failed was because it tried to enforce a uniqueness condition not satisfied by the table's data. Also, if the index creation is in fact still in progress, it seems reasonable to consider it to be an uncommitted DDL change, which pg_dump wouldn't be expected to dump anyway.

Back-patch to all active versions, and teach them to ignore invalid indexes in servers back to 8.2, where the concept was introduced.

Michael Paquier
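
Conceptually, pg_dump's index query just needs an extra filter when the server is new enough to have the invalid-index concept. A simplified sketch in C (the real query selects many more columns; the function name is hypothetical):

    /* Choose an index-listing query based on the server version (e.g. 80200 = 8.2.0). */
    static const char *index_list_query(int server_version)
    {
        if (server_version >= 80200)
            return "SELECT indexrelid FROM pg_index "
                   "WHERE indrelid = $1 AND indisvalid";   /* skip invalid indexes */
        return "SELECT indexrelid FROM pg_index WHERE indrelid = $1";
    }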
* In base backup, only include our own tablespace version directory. (Heikki Linnakangas, 2013-03-25)
If you have clusters of different versions pointing to the same tablespace location, we would incorrectly include all the data belonging to the other versions, too.

Fixes bug #7986, reported by Sergey Burladyan.
* Add a server version check to pg_basebackup and pg_receivexlog. (Heikki Linnakangas, 2013-03-25)
These programs don't work against 9.0 or earlier servers, so check that when the connection is made. That's better than the cryptic error message you got before.

Also, these programs won't work with a 9.3 server, because the WAL streaming protocol was changed in a non-backwards-compatible way. As a general rule, we don't make any guarantee that an old client will work with a new server, so check that. However, allow a 9.1 client to connect to a 9.2 server, to avoid breaking environments that currently work; a 9.1 client happens to work with a 9.2 server, even though we didn't make any great effort to ensure that.

This patch is for the 9.1 and 9.2 branches; I'll commit a similar patch to master later. Although this isn't a critical bug fix, it seems safe enough to back-patch. The error message you got when connecting to a 9.3devel server without this patch was cryptic enough to warrant backpatching.
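
With libpq, a check of this shape can be written as below: an illustrative sketch matching the accepted range described above, not the actual pg_basebackup code:

    #include <stdbool.h>
    #include <stdio.h>
    #include <libpq-fe.h>

    /* Accept only 9.1 and 9.2 servers; PQserverVersion() returns e.g. 90104 for 9.1.4. */
    static bool server_version_ok(PGconn *conn)
    {
        int version = PQserverVersion(conn);

        if (version < 90100 || version >= 90300)
        {
            fprintf(stderr, "incompatible server version %d\n", version);
            return false;
        }
        return true;
    }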
* Update time zone abbreviation lists for changes missed since 2006. (Tom Lane, 2013-03-23)
Most (all?) of Russia has moved to what's effectively year-round daylight savings time, so that the "standard" zone names now mean an hour later than they used to. Update that, notably changing MSK as per recent complaint from Sergey Konoplev, but also CHOT, GET, IRKT, KGT, KRAT, MAGT, NOVT, OMST, VLAT, YAKT, YEKT. The corresponding DST abbreviations are presumably now obsolete, but I left them in place with their old definitions, just to reduce any possible breakage from this change.

Also add VOLT (Europe/Volgograd), which for some reason we never had before, as well as MIST (Antarctica/Macquarie), and fix obsolete definitions of MAWT, TKT, and WST.
* Fix race condition in DELETE RETURNING. (Tom Lane, 2013-03-10)
When RETURNING is specified, ExecDelete would return a virtual-tuple slot that could contain pointers into an already-unpinned disk buffer. Another process could change the buffer contents before we get around to using the data, resulting in garbage results or even a crash. This seems of fairly low probability, which may explain why there are no known field reports of the problem, but it's definitely possible. Fix by forcing the result slot to be "materialized" before we release pin on the disk buffer.

Back-patch to 9.0; in earlier branches there is no bug because ExecProcessReturning sent the tuple to the destination immediately. Also, this is already fixed in HEAD as part of the writable-foreign-tables patch (where the fix is necessary for DELETE RETURNING to work at all with postgres_fdw).
* Fix infinite-loop risk in fixempties() stage of regex compilation. (Tom Lane, 2013-03-07)
The previous coding of this function could get into situations where it would never terminate, because successive passes would re-add EMPTY arcs that had been removed by the previous pass. Rewrite the function completely using a new algorithm that is guaranteed to terminate, and also seems to be usually faster than the old one.

Per Tcl bugs 3604074 and 3606683.

Tom Lane and Don Porter
* Fix to_char() to use ASCII-only case-folding rules where appropriate. (Tom Lane, 2013-03-05)
formatting.c used locale-dependent case folding rules in some code paths where the result isn't supposed to be locale-dependent, for example to_char(timestamp, 'DAY'). Since the source data is always just ASCII in these cases, that usually didn't matter ... but it does matter in Turkish locales, which have unusual treatment of "i" and "I". To confuse matters even more, the misbehavior was only visible in UTF8 encoding, because in single-byte encodings we used pg_toupper/pg_tolower which don't have locale-specific behavior for ASCII characters.

Fix by providing intentionally ASCII-only case-folding functions and using these where appropriate. Per bug #7913 from Adnan Dursun. Back-patch to all active branches, since it's been like this for a long time.
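
ASCII-only case folding of this kind is only a few lines of C; the sketch below shows the idea with hypothetical helper names, not the formatting.c functions themselves. Unlike toupper(3) under a Turkish locale, it maps 'i' to 'I' unconditionally and leaves everything outside a-z untouched:

    /* Upcase a single byte using ASCII rules only, ignoring the current locale. */
    static char ascii_toupper_char(char ch)
    {
        if (ch >= 'a' && ch <= 'z')
            return (char) (ch - ('a' - 'A'));
        return ch;
    }

    /* Apply the ASCII-only rule to a NUL-terminated string in place. */
    static void ascii_str_toupper(char *s)
    {
        for (; *s; s++)
            *s = ascii_toupper_char(*s);
    }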
* Fix overflow check in tm2timestamp (this time for sure). (Tom Lane, 2013-03-04)
I fixed this code back in commit 841b4a2d5, but didn't think carefully enough about the behavior near zero, which meant it improperly rejected 1999-12-31 24:00:00. Per report from Magnus Hagander.
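
The subtlety is that a sign-based overflow test on the combined date-and-time value must not misfire for legitimate results on either side of zero. A generic sketch of a sign-aware check for a 64-bit addition, shown only to illustrate the reasoning rather than the tm2timestamp code:

    #include <stdbool.h>
    #include <stdint.h>

    /* Return true (and skip the store) if a + b would overflow int64 arithmetic. */
    static bool add_int64_overflows(int64_t a, int64_t b, int64_t *result)
    {
        if ((a > 0 && b > INT64_MAX - a) ||
            (a < 0 && b < INT64_MIN - a))
            return true;
        *result = a + b;            /* values at or near zero pass through correctly */
        return false;
    }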
* Eliminate memory leaks in plperl's spi_prepare() function. (Tom Lane, 2013-03-01)
Careless use of TopMemoryContext for I/O function data meant that repeated use of spi_prepare and spi_freeplan would leak memory at the session level, as per report from Christian Schröder. In addition, spi_prepare leaked a lot of transient data within the current plperl function's SPI Proc context, which would be a problem for repeated use of spi_prepare within a single plperl function call; and it wasn't terribly careful about releasing permanent allocations in event of an error, either.

In passing, clean up some copy-and-pasteos in query-lookup error messages.

Alex Hunsaker and Tom Lane
* Add missing error check in regexp parser. (Tom Lane, 2013-02-27)
parseqatom() failed to check for an error return (NULL result) from its recursive call to parsebranch(), and in consequence could crash with a null-pointer dereference after an error return. This bug has been there since day one, but wasn't noticed before, probably because most error cases in parsebranch() didn't actually lead to returning NULL.

Add the missing error check, and also tweak parsebranch() to exit in a less indirect fashion after a call to parseqatom() fails.

Report by Tomasz Karlik, fix by me.
* Correct tense in log message (Peter Eisentraut, 2013-02-23)
* Fix pg_dumpall with database names containing = (Heikki Linnakangas, 2013-02-20)
If a database name contained a '=' character, pg_dumpall failed. The problem was in the way pg_dumpall passes the database name to pg_dump on the command line. If it contained a '=' character, pg_dump would interpret it as a libpq connection string instead of a plain database name.

To fix, pass the database name to pg_dump as a connection string, "dbname=foo", with the database name escaped if necessary.

Back-patch to all supported branches.
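
Quoting a value inside a libpq keyword/value connection string means wrapping it in single quotes and backslash-escaping any embedded quotes or backslashes. A standalone sketch of that escaping, using a hypothetical helper rather than the pg_dumpall code:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Build a "dbname='...'" connection string, escaping ' and \ in the name. */
    static char *quote_dbname_param(const char *dbname)
    {
        /* worst case: every byte escaped, plus "dbname='", "'" and the NUL */
        char *out = malloc(strlen(dbname) * 2 + 16);
        char *p;

        if (out == NULL)
            return NULL;
        p = out + sprintf(out, "dbname='");
        for (; *dbname; dbname++)
        {
            if (*dbname == '\'' || *dbname == '\\')
                *p++ = '\\';
            *p++ = *dbname;
        }
        *p++ = '\'';
        *p = '\0';
        return out;
    }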
* Don't pass NULL to fprintf, if a bogus connection string is given to pg_dump. (Heikki Linnakangas, 2013-02-20)
Back-patch to all supported branches.
* Fix bogus when-to-deregister-from-listener-array logic. (Tom Lane, 2013-02-13)
Since a backend adds itself to the global listener array during Exec_ListenPreCommit, it's inappropriate for it to remove itself during Exec_UnlistenCommit or Exec_UnlistenAllCommit --- that leads to failure when committing a transaction that did UNLISTEN then LISTEN, since we end up not registered though we should be. (This leads to missing later notifications, or to Assert failures in assert-enabled builds.) Instead deal with deregistering at the bottom of AtCommit_Notify, when we know the final state of the listenChannels list.

Also, simplify the representation of registration status by replacing the transient backendHasExecutedInitialListen flag with an amRegisteredListener flag.

Per report from Greg Sabino Mullane. Back-patch to 9.0, where the problem was introduced during the LISTEN/NOTIFY rewrite.
* Further cleanup of gistsplit.c. (Tom Lane, 2013-02-10)
After further reflection I was unconvinced that the existing coding is guaranteed to return valid union datums in every code path for multi-column indexes. Fix that by forcing a gistunionsubkey() call at the end of the recursion. Having done that, we can remove some clearly-redundant calls elsewhere. This should be a little faster for multi-column indexes (since the previous coding would uselessly do such a call for each column while unwinding the recursion), as well as much harder to break.

Also, simplify the handling of cases where one side or the other of a primary split contains only don't-care tuples. The previous coding used a very ugly hack in removeDontCares() that essentially forced one random tuple to be treated as non-don't-care, providing a random initial choice of seed datum for the secondary split. It seems unlikely that that method will give better-than-random splits. Instead, treat such a split as degenerate and just let the next column determine the split, the same way that we handle fully degenerate cases where the two sides produce identical union datums.
* Remove useless picksplit-doesn't-support-secondary-split log spam. (Tom Lane, 2013-02-10)
This LOG message was put in over five years ago with the evident expectation that we'd make all GiST opclasses support secondary split directly. However, no such thing ever happened, and indeed the number of opclasses supporting it decreased to zero in 9.2. The reason is that improving on the default implementation isn't that easy --- the opclass-specific code that did exist, before 9.2, doesn't appear to have been any improvement over the default.

Hence, remove the message altogether. There's certainly no point in nagging users about this in released branches, but I doubt that we'll ever implement complete opclass-specific support anyway.
* Document and clean up gistsplit.c. (Tom Lane, 2013-02-10)
Improve comments, rename some variables and functions, slightly simplify a couple of APIs, in an attempt to make this code readable by people other than its original author.

Even though this is essentially just cosmetic, back-patch to all active branches, because otherwise it's going to make back-patching future fixes in this file very painful.
* Fix gist_box_same and gist_point_consistent to handle fuzziness correctly. (Tom Lane, 2013-02-08)
While there's considerable doubt that we want fuzzy behavior in the geometric operators at all (let alone as currently implemented), nobody is stepping forward to redesign that stuff. In the meantime it behooves us to make sure that index searches agree with the behavior of the underlying operators. This patch fixes two problems in this area.

First, gist_box_same was using fuzzy equality, but it really needs to use exact equality to prevent not-quite-identical upper index keys from being treated as identical, which for example would prevent an existing upper key from being extended by an amount less than epsilon. This would result in inconsistent indexes. (The next release notes will need to recommend that users reindex GiST indexes on boxes, polygons, circles, and points, since all four opclasses use gist_box_same.)

Second, gist_point_consistent used exact comparisons for upper-page comparisons in ~= searches, when it needs to use fuzzy comparisons to ensure it finds all matches; and it used fuzzy comparisons for point <@ box searches, when it needs to use exact comparisons because that's what the <@ operator (rather inconsistently) does.

The added regression test cases illustrate all three misbehaviors.

Back-patch to all active branches. (8.4 did not have GiST point_ops, but it still seems prudent to apply the gist_box_same patch to it.)

Alexander Korotkov, reviewed by Noah Misch
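
The distinction at issue is between the geometric types' fuzzy comparisons and plain exact comparisons. A minimal sketch of the two; the epsilon value here is assumed for illustration and is not taken from the actual geometric code:

    #include <math.h>
    #include <stdbool.h>

    #define GEO_EPSILON 1.0E-06   /* assumed fuzz factor, for illustration */

    /* Fuzzy equality, as the geometric operators use for value comparisons. */
    static bool fuzzy_eq(double a, double b)
    {
        return fabs(a - b) <= GEO_EPSILON;
    }

    /* Exact equality, which index-internal "same key?" tests must use so that
     * nearly-identical union keys are not wrongly treated as interchangeable. */
    static bool exact_eq(double a, double b)
    {
        return a == b;
    }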
* Repair bugs in GiST page splitting code for multi-column indexes. (Tom Lane, 2013-02-07)
When considering a non-last column in a multi-column GiST index, gistsplit.c tries to improve on the split chosen by the opclass-specific pickSplit function by considering penalties for the next column. However, there were two bugs in this code: it failed to recompute the union keys for the leftmost index columns, even though these might well change after reassigning tuples; and it included the old union keys in the recomputation for the columns it did recompute, so that those keys couldn't get smaller even if they should. The first problem could result in an invalid index in which searches wouldn't find index entries that are in fact present; the second would make the index less efficient to search.

Both of these errors were caused by misuse of gistMakeUnionItVec, whose API was designed in a way that just begged such errors to be made. There is no situation in which it's safe or useful to compute the union keys for a subset of the index columns, and there is no caller that wants any previous union keys to be included in the computation; so the undocumented choice to treat the union keys as in/out rather than pure output parameters is a waste of code as well as being dangerous. Hence, rather than just making a minimal patch, I've changed the API of gistMakeUnionItVec to remove the "startkey" parameter (it now always processes all index columns) and treat the attr/isnull arrays as purely output parameters.

In passing, also get rid of a couple of unnecessary and dangerous uses of static variables in gistutil.c. It's remarkable that the one in gistMakeUnionKey hasn't given us portability troubles before now, because in addition to posing a re-entrancy hazard, it was unsafely assuming that a static char[] array would have at least Datum alignment.

Per investigation of a trouble report from Tomas Vondra. (There are also some bugs in contrib/btree_gist to be fixed, but that seems like material for a separate patch.)

Back-patch to all supported branches.
* Fix possible failure to send final transaction counts to stats collector. (Tom Lane, 2013-02-07)
Normally, we suppress sending a tabstats message to the collector unless there were some actual table stats to send. However, during backend exit we should force out the message if there are any transaction commit/abort counts to send, else the session's last few commit/abort counts will never get reported at all. We had logic for this, but the short-circuit test at the top of pgstat_report_stat() ignored the "force" flag, with the consequence that session-ending transactions that touched no database-local tables would not get counted.

Seems to be an oversight in my commit 641912b4d17fd214a5e5bae4e7bb9ddbc28b144b, which added the "force" flag. That was back in 8.3, so back-patch to all supported versions.
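
The pattern being described is a fast-path check that must also honor a "force" flag. A self-contained sketch with stand-in state and hypothetical names, not the actual pgstat code:

    #include <stdbool.h>
    #include <stdio.h>

    /* Stand-ins for the real pending-work checks. */
    static bool pending_table_stats = false;
    static bool pending_xact_counts = true;

    /* Report stats; 'force' is set at backend exit so final counts are never dropped. */
    static void report_stat(bool force)
    {
        /* the buggy version tested only pending_table_stats here, ignoring 'force' */
        if (!pending_table_stats && !(force && pending_xact_counts))
            return;
        printf("sending tabstat message (force=%d)\n", force);
    }

    int main(void)
    {
        report_stat(false);   /* nothing to do: skipped */
        report_stat(true);    /* backend exit: commit/abort counts still go out */
        return 0;
    }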
* Stamp 9.1.8. [REL9_1_8] (Tom Lane, 2013-02-04)