path: root/src/backend/utils
* Emit invalidations to standby for transactions without xid. (Andres Freund, 2016-04-26)
  So far, when a transaction with pending invalidations, but without an assigned xid, committed, we simply ignored those invalidation messages. That's problematic, because those are actually sent for a reason. Known symptoms of this include that existing sessions on a hot-standby replica sometimes fail to notice new concurrently built indexes and visibility map updates.

  The solution is to WAL-log such invalidations in transactions without an xid. We considered alternatively force-assigning an xid, but that'd be problematic for vacuum, which might be run in systems with few xids.

  Important: This adds a new WAL record, but as the patch has to be back-patched, we can't bump the WAL page magic. This means that standbys have to be updated before primaries; otherwise "PANIC: standby_redo: unknown op code 32" errors can be encountered.

  Reported-By: Васильев Дмитрий, Masahiko Sawada
  Discussion: CAB-SwXY6oH=9twBkXJtgR4UC1NqT-vpYAtxCseME62ADwyK5OA@mail.gmail.com
  CAD21AoDpZ6Xjg=gFrGPnSn4oTRRcwK1EBrWCq9OqOHuAcMMC=w@mail.gmail.com
* Fix pg_get_functiondef to dump parallel-safety markings. (Robert Haas, 2016-04-26)
  Ashutosh Sharma
* Yet more portability hacking for degree-based trig functions. (Tom Lane, 2016-04-26)
  The true explanation for Peter Eisentraut's report of inexact asind results seems to be that (a) he's compiling for the x87 instruction set, which uses wider-than-double float registers, plus (b) the library function asin() on his platform returns a result that is wider than double and is not rounded to double width.

  To fix, we have to force the function's result to be rounded comparably to what happened to the scaling constant asin_0_5. Experimentation suggests that storing it into a volatile local variable is the least ugly way of making that happen.

  Although only asin() is known to exhibit an observable inexact result, we'd better do this in all the places where we're hoping to get an exact result by scaling.
* Enable parallel query by default. (Robert Haas, 2016-04-26)
  Change the max_parallel_degree default from 0 to 2. It is possible that this is not a good idea, or that we should go with 1 worker rather than 2, but we won't find out without trying it. Along the way, reword the documentation for max_parallel_degree a little bit to hopefully make it clearer.

  Discussion: 20160420174631.3qjjhpwsvvx5bau5@alap3.anarazel.de
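  For illustration (not part of the commit message; values shown are examples only), the GUC can be changed per session:

    -- Allow up to 2 parallel workers per query (the new default in this commit).
    SET max_parallel_degree = 2;
    -- Turn parallel query back off for the current session.
    SET max_parallel_degree = 0;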
* Fix C comment typo and redundant test. (Kevin Grittner, 2016-04-25)
* New method for preventing compile-time calculation of degree constants. (Tom Lane, 2016-04-25)
  Commit 65abaab547a5758b tried to prevent the scaling constants used in the degree-based trig functions from being precomputed at compile time, because some compilers do that with functions that don't yield results identical-to-the-last-bit to what you get at runtime. A report from Peter Eisentraut suggests that some recent compilers are smart enough to see through that trick, though.

  Instead, let's put the inputs to these calculations into non-const global variables, which should be a more reliable way of convincing the compiler that it can't assume that they are compile-time constants. (If we really get desperate, we could mark these variables "volatile", but I do not believe we should have to.)
* Fix documentation & config inconsistencies around 428b1d6b2. (Andres Freund, 2016-04-24)
  Several issues:
  1) checkpoint_flush_after doc and code disagreed about the default
  2) new GUCs were missing from postgresql.conf.sample
  3) Outdated source-code comment about bgwriter_flush_after's default
  4) Sub-optimal categories assigned to new GUCs
  5) Docs suggested backend_flush_after is PGC_SIGHUP, but it's PGC_USERSET.
  6) Spell out int as integer in the docs, as done elsewhere

  Reported-By: Magnus Hagander, Fujii Masao
  Discussion: CAHGQGwETyTG5VYQQ5C_srwxWX7RXvFcD3dKROhvAWWhoSBdmZw@mail.gmail.com
* Rename strtoi() to strtoint(). (Tom Lane, 2016-04-23)
  NetBSD has seen fit to invent a libc function named strtoi(), which conflicts with the long-established static functions of the same name in datetime.c and ecpg's interval.c. While muttering darkly about intrusions on application namespace, we'll rename our functions to avoid the conflict.

  Back-patch to all supported branches, since this would affect attempts to build any of them on recent NetBSD.

  Thomas Munro
* Properly mark initRectBox() as taking 'void' args. (Bruce Momjian, 2016-04-23)
  This was part of the box type in the SP-GiST index patch.

  Reported-by: Emre Hasegeli
* Fix ruleutils.c's dumping of ScalarArrayOpExpr containing an EXPR_SUBLINK. (Tom Lane, 2016-04-21)
  When we shoehorned "x op ANY (array)" into the SQL syntax, we created a fundamental ambiguity as to the proper treatment of a sub-SELECT on the righthand side: perhaps what's meant is to compare x against each row of the sub-SELECT's result, or perhaps the sub-SELECT is meant as a scalar sub-SELECT that delivers a single array value whose members should be compared against x.

  The grammar resolves it as the former case whenever the RHS is a select_with_parens, making the latter case hard to reach --- but you can get at it, with tricks such as attaching a no-op cast to the sub-SELECT. Parse analysis would throw away the no-op cast, leaving a parsetree with an EXPR_SUBLINK SubLink directly under a ScalarArrayOpExpr. ruleutils.c was not clued in on this fine point, and would naively emit "x op ANY ((SELECT ...))", which would be parsed as the first alternative, typically leading to errors like "operator does not exist: text = text[]" during dump/reload of a view or rule containing such a construct.

  To fix, emit a no-op cast when dumping such a parsetree. This might well be exactly what the user wrote to get the construct accepted in the first place; and even if she got there with some other dodge, it is a valid representation of the parsetree.

  Per report from Karl Czajkowski. He mentioned only a case involving RLS policies, but actually the problem is very old, so back-patch to all supported branches.

  Report: <20160421001832.GB7976@moraine.isi.edu>
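  A sketch of the ambiguity and the no-op-cast trick described above (my example, not from the commit; the table, column, and types are hypothetical):

    -- Row-comparison reading: t.x is compared against each row of the sub-SELECT.
    SELECT * FROM t WHERE t.x = ANY (SELECT y FROM other);

    -- Scalar-array reading: the sub-SELECT delivers a single array value; the
    -- no-op cast forces this interpretation, and is what ruleutils.c now emits.
    SELECT * FROM t WHERE t.x = ANY ((SELECT array_agg(y) FROM other)::text[]);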
* Prevent possible crash reading pg_stat_activity. (Robert Haas, 2016-04-21)
  Also, avoid reading PGPROC's wait_event field twice, once for the wait event and again for the wait_event_type, because the value might change in the middle.

  Petr Jelinek and Robert Haas
* Use PG_INT32_MIN instead of reiterating the constant. (Robert Haas, 2016-04-13)
  Makes no difference, but it's cleaner this way.

  Michael Paquier
* Fix whitespace. (Peter Eisentraut, 2016-04-11)
* Make oldSnapshotControl a pointer to a volatile structure. (Kevin Grittner, 2016-04-11)
  It was incorrectly declared as a volatile pointer to a non-volatile structure. Eliminate the OldSnapshotControl struct definition; it is really not needed. Pointed out by Tom Lane.

  While at it, add OldSnapshotControlData to pgindent's list of structures.
* Fix whitespace. (Peter Eisentraut, 2016-04-11)
* Add comment about intentional fallthrough in switch. (Tom Lane, 2016-04-10)
  Coverity complained about an apparent missing "break" in a switch added by bb140506df605fab. The human-readable comments are pretty clear that this is intentional, but add a standard /* FALL THRU */ comment to make it clear to tools too.
* Create default roles. (Stephen Frost, 2016-04-08)
  This creates an initial set of default roles which administrators may use to grant access to, historically, superuser-only functions. Using these roles instead of granting superuser access reduces the number of superuser roles required for a system. Documentation for each of the default roles has been added to user-manag.sgml.

  Bump catversion to 201604082, as we had a commit that bumped it to 201604081 and another that set it back to 201604071...

  Reviews by José Luis Tallón and Robert Haas
* Reserve the "pg_" namespace for roles. (Stephen Frost, 2016-04-08)
  This will prevent users from creating roles which begin with "pg_" and will check for those roles before allowing an upgrade using pg_upgrade.

  This will allow for default roles to be provided at initdb time.

  Reviews by José Luis Tallón and Robert Haas
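  For illustration (my example, not from the commit; the role names are hypothetical):

    CREATE ROLE pg_my_admin;   -- now rejected: names starting with "pg_" are reserved
    CREATE ROLE my_admin;      -- ordinary role names are unaffected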
* Add the "snapshot too old" feature. (Kevin Grittner, 2016-04-08)
  This feature is controlled by a new old_snapshot_threshold GUC. A value of -1 disables the feature, and that is the default. The value of 0 is just intended for testing. Above that, it is the number of minutes a snapshot can reach before pruning and vacuum are allowed to remove dead tuples which the snapshot would otherwise protect. The xmin associated with a transaction ID does still protect dead tuples. A connection which is using an "old" snapshot does not get an error unless it accesses a page modified recently enough that it might not be able to produce accurate results.

  This is similar to the Oracle feature, and we use the same SQLSTATE and error message for compatibility.
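  For illustration (my example, not from the commit; the threshold value is arbitrary, and this GUC needs a server restart to take effect):

    -- Snapshots older than 60 minutes stop blocking pruning/vacuum; a query still
    -- using such a snapshot may then get a "snapshot too old" error.
    ALTER SYSTEM SET old_snapshot_threshold = 60;
    -- The default of -1 disables the feature entirely.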
* Revert CREATE INDEX ... INCLUDING ... (Teodor Sigaev, 2016-04-08)
  It's not ready yet; revert two commits:
  690c543550b0d2852060c18d270cdb534d339d9a - unstable test output
  386e3d7609c49505e079c40c65919d99feb82505 - patch itself
* Add combine functions for various floating-point aggregates. (Robert Haas, 2016-04-08)
  This allows parallel aggregation to use them. It may seem surprising that we use float8_combine for both float4_accum and float8_accum transition functions, but that's because those functions differ only in the type of the non-transition-state argument.

  Haribabu Kommi, reviewed by David Rowley and Tomas Vondra
* Restore original tsquery operation numbering. (Teodor Sigaev, 2016-04-08)
  As noticed by Tom Lane, changing an operation's number in commit bb140506df605fab58f48926ee1db1f80bdafb59 causes on-disk format incompatibility. Revert to the previous numbering; that is the reason for adding a special array to store operation priorities. This also reverts the sort order of tsquery to the previous one.

  Author: Dmitry Ivanov
* CREATE INDEX ... INCLUDING (column[, ...]) (Teodor Sigaev, 2016-04-08)
  Now indexes (only B-tree for now) can contain "extra" columns which don't participate in the index structure; they are just stored in leaf tuples. This allows an index-only scan to be satisfied by a single index instead of two or more indexes.

  Author: Anastasia Lubennikova with minor editorializing by me
  Reviewers: David Rowley, Peter Geoghegan, Jeff Janes
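  For illustration only (this commit was later reverted, per the entry above; the table and column names are hypothetical):

    -- b is carried only in leaf tuples, so this single index can satisfy
    -- "SELECT a, b FROM t WHERE a = ..." with an index-only scan.
    CREATE INDEX t_a_incl_b ON t (a) INCLUDING (b);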
* Attempt to fix breakage due to declaration following code. (Robert Haas, 2016-04-08)
  Per Tom Lane and the buildfarm.
* Rename comparePos() to compareWordEntryPos(). (Teodor Sigaev, 2016-04-08)
  Rename comparePos() to compareWordEntryPos() to prevent exporting too generic a name.

  Per gripe from Tom Lane.
* Use quicksort, not replacement selection, for external sorting. (Robert Haas, 2016-04-08)
  We still use replacement selection only for the first run of the sort, and only when the number of tuples is relatively small. Otherwise, the first run, and subsequent runs in all cases, are produced using quicksort. This tends to be faster except perhaps for very small amounts of working memory.

  Peter Geoghegan, reviewed by Tomas Vondra, Jeff Janes, Mithun Cy, Greg Stark, and me.
* Use Foreign Key relationships to infer multi-column join selectivity. (Simon Riggs, 2016-04-08)
  In cases where joins use multiple columns we currently assess each join separately causing gross mis-estimates for join cardinality.

  This patch adds use of FK information for the first time into the planner. When FKs are present and we have multi-column join information, plan estimates will be drastically improved. Cases with multiple FKs are handled, though partial matches are ignored currently.

  Net effect is substantial performance improvements for joins in many common cases. Additional planning time is isolated to cases that are currently performing poorly, measured at 0.08 - 0.15 ms.

  Please watch for planner performance regressions; circumstances seem unlikely but the law of unintended consequences may apply somewhen. Additional complex tests welcome to prove this before release.

  Tests can be performed using SET enable_fkey_estimates = on | off using scripts provided during Hackers discussions, message id: 552335D9.3090707@2ndquadrant.com

  Authors: Tomas Vondra and David Rowley
  Reviewed and tested by Simon Riggs, adding comments only
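  A sketch of the kind of join this helps, plus the test switch the message mentions (my example; the table names are hypothetical, and the switch is quoted from the commit message rather than independently verified):

    -- With a two-column FK from child(a, b) to parent(a, b), the planner can now
    -- use the FK to estimate the join cardinality instead of multiplying
    -- per-column selectivities.
    SELECT * FROM child c JOIN parent p ON c.a = p.a AND c.b = p.b;

    SET enable_fkey_estimates = off;   -- for before/after comparisons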
* Zeroing unused parts during tsquery construction. (Teodor Sigaev, 2016-04-07)
  Per investigation of a failure on buildfarm member skink, with help from RANDOMIZE_ALLOCATED_MEMORY.
* Phrase full text search. (Teodor Sigaev, 2016-04-07)
  This patch introduces a new text search operator (<-> or <DISTANCE>) into tsquery. The on-disk and binary in/out formats of tsquery are backward compatible.

  It has two side effects:
  - the sort order for tsquery changes, so users who have a btree index over tsquery should reindex it
  - fewer parentheses appear in tsquery output, making tsquery more readable

  Authors: Teodor Sigaev, Oleg Bartunov, Dmitry Ivanov
  Reviewers: Alexander Korotkov, Artur Zakirov
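  For illustration (my examples, not from the commit message):

    -- True: 'fat' is immediately followed by 'cat'.
    SELECT to_tsvector('english', 'a fat cat sat on a mat')
           @@ to_tsquery('english', 'fat <-> cat');
    -- The <N> form requires the lexemes to be exactly N positions apart.
    SELECT to_tsvector('english', 'a fat cat sat on a mat')
           @@ to_tsquery('english', 'fat <2> sat');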
* Load FK defs into relcache for use by planner. (Simon Riggs, 2016-04-07)
  The fastpath ignores this if no triggers are defined.

  Author: Tomas Vondra, with fastpath and comments added by me
  Reviewers: David Rowley, Simon Riggs
* Use GRANT system to manage access to sensitive functions. (Stephen Frost, 2016-04-06)
  Now that pg_dump will properly dump out any ACL changes made to functions which exist in pg_catalog, switch to using the GRANT system to manage access to those functions.

  This means removing 'if (!superuser()) ereport()' checks from the functions themselves and then REVOKEing EXECUTE right from 'public' for these functions in system_views.sql.

  Reviews by Alexander Korotkov, Jose Luis Tallon
* In pg_dump, include pg_catalog and extension ACLs, if changed. (Stephen Frost, 2016-04-06)
  Now that all of the infrastructure exists, add in the ability to dump out the ACLs of the objects inside of pg_catalog or the ACLs for objects which are members of extensions, but only if they have been changed from their original values. The original values are tracked in pg_init_privs.

  When pg_dump'ing 9.6-and-above databases, we will dump out the ACLs for all objects in pg_catalog and the ACLs for all extension members, where the ACL has been changed from the original value which was set during either initdb or CREATE EXTENSION.

  This should not change dumps against pre-9.6 databases.

  Reviews by Alexander Korotkov, Jose Luis Tallon
* Add jsonb_insert. (Teodor Sigaev, 2016-04-06)
  It inserts a new value into a jsonb array at an arbitrary position, or a new key into a jsonb object.

  Author: Dmitry Dolgov
  Reviewers: Petr Jelinek, Vitaly Burovoy, Andrew Dunstan
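  For illustration (my examples, not from the commit message):

    -- Insert before array index 1: ["a", "new", "b"]
    SELECT jsonb_insert('["a", "b"]', '{1}', '"new"');
    -- With the insert_after flag set, insert after index 1: ["a", "b", "new"]
    SELECT jsonb_insert('["a", "b"]', '{1}', '"new"', true);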
* Support multiple synchronous standby servers. (Fujii Masao, 2016-04-06)
  Previously synchronous replication offered only the ability to confirm that all changes made by a transaction had been transferred to at most one synchronous standby server.

  This commit extends synchronous replication so that it supports multiple synchronous standby servers. It enables users to consider one or more standby servers as synchronous, and increase the level of transaction durability by ensuring that transaction commits wait for replies from all of those synchronous standbys.

  Multiple synchronous standby servers are configured in synchronous_standby_names, which is extended to support the new syntax of 'num_sync ( standby_name [ , ... ] )', where num_sync specifies the number of synchronous standbys that transaction commits need to wait for replies from and standby_name is the name of a standby server. The syntax of 'standby_name [ , ... ]' which was used in 9.5 or before is also still supported. It's the same as the new syntax with num_sync=1.

  This commit doesn't include the "quorum commit" feature which was discussed in pgsql-hackers. Synchronous standbys are chosen based on their priorities. synchronous_standby_names determines the priority of each standby for being chosen as a synchronous standby. The standbys whose names appear earlier in the list are given higher priority and will be considered as synchronous. Other standby servers appearing later in this list represent potential synchronous standbys.

  The regression test for multiple synchronous standbys is not included in this commit. It should come later.

  Authors: Sawada Masahiko, Beena Emerson, Michael Paquier, Fujii Masao
  Reviewed-By: Kyotaro Horiguchi, Amit Kapila, Robert Haas, Simon Riggs, Amit Langote, Thomas Munro, Sameer Thakur, Suraj Kharage, Abhijit Menon-Sen, Rajeev Rastogi

  Many thanks to the various individuals who were involved in discussing and developing this feature.
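  For illustration (my example; the standby names are hypothetical):

    -- Commits wait for replies from 2 of the listed standbys; s1 and s2 get the
    -- highest priority, s3 remains a potential synchronous standby.
    ALTER SYSTEM SET synchronous_standby_names = '2 (s1, s2, s3)';
    SELECT pg_reload_conf();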
* Add parallel query support functions for assorted aggregates. (Robert Haas, 2016-04-05)
  This lets us use parallel aggregate for a variety of useful cases that didn't work before, like sum(int8), sum(numeric), several versions of avg(), and various other functions.

  Add some regression tests, as well, testing the general sanity of these and future catalog entries.

  David Rowley, reviewed by Tomas Vondra, with a few further changes by me.
* Disallow newlines in parameter values to be set in ALTER SYSTEM. (Tom Lane, 2016-04-04)
  As noted by Julian Schauder in bug #14063, the configuration-file parser doesn't support embedded newlines in string literals. While there might someday be a good reason to remove that restriction, there doesn't seem to be one right now. However, ALTER SYSTEM SET could accept strings containing newlines, since many of the variable-specific value-checking routines would just see a newline as whitespace. This led to writing a postgresql.auto.conf file that was broken and had to be removed manually.

  Pending a reason to work harder, just throw an error if someone tries this.

  In passing, fix several places in the ALTER SYSTEM logic that failed to provide an errcode() for an ereport(), and thus would falsely log the failure as an internal XX000 error.

  Back-patch to 9.4 where ALTER SYSTEM was introduced.
* Introduce a LOG_SERVER_ONLY ereport level, which is never sent to client. (Tom Lane, 2016-04-04)
  This elevel is useful for logging audit messages and similar information that should not be passed to the client. It's equivalent to LOG in terms of decisions about logging priority in the postmaster log, but messages with this elevel will never be sent to the client.

  In the current implementation, it's just an alias for the longstanding COMMERROR elevel (or more accurately, we've made COMMERROR an alias for this). At some point it might be interesting to allow a LOG_ONLY flag to be attached to any elevel, but that would be considerably more complicated, and it's not clear there's enough use-cases to justify the extra work. For now, let's just take the easy 90% solution.

  David Steele, reviewed by Fabien Coelho, Petr Jelínek, and myself
* Improve estimate of distinct values in estimate_num_groups(). (Dean Rasheed, 2016-04-04)
  When adjusting the estimate for the number of distinct values from a rel in a grouped query to take into account the selectivity of the rel's restrictions, use a formula that is less likely to produce under-estimates.

  The old formula simply multiplied the number of distinct values in the rel by the restriction selectivity, which would be correct if the restrictions were fully correlated with the grouping expressions, but can produce significant under-estimates in cases where they are not well correlated.

  The new formula is based on the random selection probability, and so assumes that the restrictions are not correlated with the grouping expressions. This is guaranteed to produce larger estimates, and of course risks over-estimating in cases where the restrictions are correlated, but that has less severe consequences than under-estimating, which might lead to a HashAgg that consumes an excessive amount of memory.

  This could possibly be improved upon in the future by identifying correlated restrictions and using a hybrid of the old and new formulae.

  Author: Tomas Vondra, with some hacking by me
  Reviewed-by: Mark Dilger, Alexander Korotkov, Dean Rasheed and Tom Lane
  Discussion: http://www.postgresql.org/message-id/flat/56CD0381.5060502@2ndquadrant.com
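  A sketch of the difference (my paraphrase under the assumptions stated above, not text from the commit): with N tuples in the rel, n of them surviving the restrictions, and d distinct values, the old estimate was roughly d * (n / N), while the random-selection view estimates the number of distinct values expected to appear among n randomly chosen rows, roughly d * (1 - ((N - n) / N)^(N / d)), which, as the message says, is guaranteed to be at least as large.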
* Copyedit comments and documentation. (Noah Misch, 2016-04-01)
* Improve portability of I/O behavior for the geometric types. (Tom Lane, 2016-03-30)
  Formerly, the geometric I/O routines such as box_in and point_out relied directly on strtod() and sprintf() for conversion of the float8 component values of their data types. However, the behavior of those functions is pretty platform-dependent, especially for edge-case values such as infinities and NaNs. This was exposed by commit acdf2a8b372aec1d, which added test cases involving boxes with infinity endpoints, and immediately failed on Windows and AIX buildfarm members.

  We solved these problems years ago in the main float8in and float8out functions, so let's fix it by making the geometric types use that code instead of depending directly on the platform-supplied functions. To do this, refactor the float8in code so that it can be used to parse just part of a string, and as a convenience make the guts of float8out usable without going through DirectFunctionCall.

  While at it, get rid of geo_ops.c's fairly shaky assumptions about the maximum output string length for a double, by having it build results in StringInfo buffers instead of fixed-length strings.

  In passing, convert all the "invalid input syntax for type foo" messages in this area of the code into "invalid input syntax for type %s" to reduce the number of distinct translatable strings, per recent discussion. We would have needed a fair number of the latter anyway for code-sharing reasons, so we might as well just go whole hog.

  Note: this patch is by no means intended to guarantee that the geometric types uniformly behave sanely for infinity or NaN component values. But any bugs we have in that line were there all along, they were just harder to reach in a platform-independent way.
* Suppress uninitialized-variable warnings. (Tom Lane, 2016-03-30)
  My compiler doesn't like the lack of initialization of "flag", and I think it's right: if there were zero keys we'd have an undefined result. The AND of zero items is TRUE, so initialize to TRUE.
* Introduce SP-GiST operator class over box. (Teodor Sigaev, 2016-03-30)
  The patch implements a quad-tree over boxes. The naive approach of a 2D quad tree will not work for non-point objects because splitting space at a node is not efficient. The idea of the patch is to treat 2D boxes as 4D points, so that objects do not overlap (in 4D space). The performance tests reveal that this technique is especially beneficial with heavily overlapping objects, so-called "spaghetti data".

  Author: Alexander Lebedev, with editorializing by Emre Hasegeli and me
* Use traversalValue in SP-GiST range opclass. (Teodor Sigaev, 2016-03-30)
  Author: Alexander Lebedev
* Add new replication mode synchronous_commit = 'remote_apply'. (Robert Haas, 2016-03-29)
  In this mode, the master waits for the transaction to be applied on the remote side, not just written to disk. That means that you can count on a transaction started on the standby to see all commits previously acknowledged by the master.

  To make this work, the standby sends a reply after replaying each commit record generated with synchronous_commit >= 'remote_apply'. This introduces a small inefficiency: the extra replies will be sent even by standbys that aren't the current synchronous standby. But previously-existing synchronous_commit levels make no attempt at all to optimize which replies are sent based on what the primary cares about, so this is no worse, and at least avoids any extra replies for people not using the feature at all.

  Thomas Munro, reviewed by Michael Paquier and by me. Some additional tweaks by me.
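  For illustration (my example; the table is hypothetical):

    -- Wait until the synchronous standby has *applied* this commit, so a
    -- subsequent read on that standby is guaranteed to see it.
    SET synchronous_commit = remote_apply;
    BEGIN;
    INSERT INTO accounts VALUES (1, 100.00);
    COMMIT;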
* Fix interval_mul() to not produce insane results. (Tom Lane, 2016-03-29)
  interval_mul() attempts to prevent its calculations from producing silly results, but it forgot that zero times infinity yields NaN in IEEE arithmetic. Hence, a case like '1 second'::interval * 'infinity'::float8 produced a NaN for the months product, which didn't trigger the range check, resulting in bogus and possibly platform-dependent output.

  This isn't terribly obvious to the naked eye because if you try that exact case, you get "interval out of range" which is what you expect --- but if you look closer, the error is coming from interval_out not interval_mul. interval_mul has allowed a bogus value into the system. Fix by adding isnan tests.

  Noted while testing Vitaly Burovoy's fix for infinity input to to_timestamp(). Given the lack of field complaints, I doubt this is worth a back-patch.
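  The problem case quoted above, for reference (my reading of the fix is that the error is now raised by interval_mul itself rather than by interval_out):

    -- Previously allowed a NaN months value into the system; now raises
    -- "interval out of range" at multiplication time.
    SELECT '1 second'::interval * 'infinity'::float8;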
* Allow to_timestamp(float8) to convert float infinity to timestamp infinity. (Tom Lane, 2016-03-29)
  With the original SQL-function implementation, such cases failed because we don't support infinite intervals. Converting the function to C lets us bypass the interval representation, which should be a bit faster as well as more flexible.

  Vitaly Burovoy, reviewed by Anastasia Lubennikova
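  For illustration (my examples; output shown assuming the UTC time zone):

    SELECT to_timestamp('infinity'::float8);   -- now returns 'infinity' instead of failing
    SELECT to_timestamp(0);                    -- 1970-01-01 00:00:00+00, as before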
* Document errhidecontext() where it ought to be documented. (Tom Lane, 2016-03-28)
  Seems to have been missed when this function was added. Noted while looking at David Steele's proposal to add another similar function.
* Reset plan->row_security_env and planUserId. (Stephen Frost, 2016-03-28)
  In the plancache, we check if the environment we planned the query under has changed in a way which requires us to re-plan, such as when the user for whom the plan was prepared changes and RLS is being used (and, therefore, there may be different policies to apply).

  Unfortunately, while those values were set and checked, they were not being reset when the query was re-planned and therefore, in cases where we change role, re-plan, and then change role again, we weren't re-planning again. This leads to potentially incorrect policies being applied in cases where role-specific policies are used and a given query is planned under one role and then executed under other roles, which could happen under security definer functions or when a common user and query is planned initially and then re-used across multiple SET ROLEs.

  Further, extensions which made use of CopyCachedPlan() may suffer from similar issues as the RLS-related fields were not properly copied as part of the plan and therefore RevalidateCachedQuery() would copy in the current settings without invalidating the query.

  Fix by using the same approach used for 'search_path', where we set the correct values in CompleteCachedPlan(), check them early on in RevalidateCachedQuery() and then properly reset them if re-planning. Also, copy through the values during CopyCachedPlan().

  Pointed out by Ashutosh Bapat. Reviewed by Michael Paquier.

  Back-patch to 9.5 where RLS was introduced.

  Security: CVE-2016-2193
* Code and docs review for commit 3187d6de0e5a9e805b27c48437897e8c39071d45. (Tom Lane, 2016-03-28)
  Fix up check for high-bit-set characters, which provoked "comparison is always true due to limited range of data type" warnings on some compilers, and was unlike the way we do it elsewhere anyway.

  Fix omission of "$" from the set of valid identifier continuation characters.

  Get rid of sanitize_text(), which was utterly inconsistent with any other error report anywhere in the system, and wasn't even well designed on its own terms (double-quoting the result string without escaping contained double quotes doesn't seem very well thought out).

  Fix up error messages, which didn't follow the message style guidelines very well, and were overly specific in situations where the actual mistake might not be what they said.

  Improve documentation.

  (I started out just intending to fix the compiler warning, but the more I looked at the patch the less I liked it.)
* Guard against zero vardata.rel->tuples in estimate_hash_bucketsize(). (Tom Lane, 2016-03-27)
  If the referenced rel was proven empty, we'd compute 0/0 here, which results in the function returning NaN. That's a bit more serious than the other zero-divide case. Still, it only seems to be possible in HEAD, so no back-patch.

  Per report from Piotr Stefaniak. I looked through the rest of selfuncs.c and found no other likely trouble spots.