| Commit message | Author | Age |
... | |
| |
We were merely Assert'ing that the Var matched the RTE it's supposedly
from. But if the user passes incorrect information to pg_get_expr(),
the RTE might in fact not match; this led either to Assert failures
or core dumps, as reported by Chris Hanks in bug #14220. To fix, just
convert the Asserts to test-and-elog. Adjust an existing test-and-elog
elsewhere in the same function to be consistent in wording.
(If we really felt these were user-facing errors, we might promote them to
ereports; but I can't convince myself that they're worth translating.)
Back-patch to 9.3; the problematic code doesn't exist before that, and
a quick check says that 9.2 doesn't crash on such cases.
Michael Paquier and Thomas Munro
Report: <20160629224349.1407.32667@wrigleys.postgresql.org>
| |
The <-> operator now has higher precedence than the & (AND) operator. This change
was motivated by the unexpected difference between two similar queries:
'a & b <-> c'::tsquery and 'b <-> c & a'::tsquery. Previously the first meant
'(a & b) <-> c' and the second '(b <-> c) & a'; now the phrase operator binds
more tightly in both cases.
Per suggestion from Tom Lane <32260.1465402409@sss.pgh.pa.us>
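For illustration, a hedged sketch of the new precedence (the interpretations in the
comments restate the rule above; output text is not copied from a server):

    SELECT 'a & b <-> c'::tsquery;   -- parsed as  a & (b <-> c)
    SELECT 'b <-> c & a'::tsquery;   -- parsed as  (b <-> c) & a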
| |
If there is no positional information for the lexemes, the phrase operator no
longer falls back to the AND operator. This change requires modifying the
TS_execute() interface, because in some places (in indexes, for example)
positional information is unavailable, and in those cases we need to be able to
force the fallback to AND.
Per discussion <c19fcfec308e6ccd952cdde9e648b505@mail.gmail.com>
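A hedged illustration (strip() removes positional information; the result in the
comment is the one the new rule implies):

    SELECT strip(to_tsvector('fat black cat')) @@ 'fat <-> cat'::tsquery;
    -- false: without positions the phrase operator cannot match,
    -- and it no longer degrades to plain AND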
| |
The phrase operator now requires the exact distance between lexemes instead of
less-than-or-equal.
Per discussion <c19fcfec308e6ccd952cdde9e648b505@mail.gmail.com>
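A hedged example of the exact-distance semantics (sample text chosen for this
sketch; results in the comments are the expected ones):

    SELECT to_tsvector('fat black cat') @@ 'fat <2> cat'::tsquery;   -- true: exactly 2 apart
    SELECT to_tsvector('fat black cat') @@ 'fat <-> cat'::tsquery;   -- false: <-> means exactly 1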
| |
The original coding had three separate booleans representing partial
aggregation behavior, which was confusing, unreadable, and error-prone,
not least because the booleans weren't always listed in the same order.
It was also inadequate for the allegedly-desirable future extension to
support intermediate partial aggregation, because we'd need separate
markers for serialization and deserialization in such a case.
Merge these bools into an enum "AggSplit" to provide symbolic names for
the supported operating modes (and document what those are). By assigning
the values of the enum constants carefully, we can treat AggSplit values
as options bitmasks so that tests of what to do aren't noticeably more
expensive than before.
While at it, get rid of Aggref.aggoutputtype. That's not needed since
commit 59a3795c2 got rid of setrefs.c's special-purpose Aggref comparison
code, and it likewise seemed more confusing than helpful.
Assorted comment cleanup as well (there's still more that I want to do
in that line).
catversion bump for change in Aggref node contents. Should be the last
one for partial-aggregation changes.
Discussion: <29309.1466699160@sss.pgh.pa.us>
| |
A deserialize function's result is short-lived data during partial
aggregation, since we're just going to pass it to the combine function
and then it's of no use anymore. However, the built-in deserialize
functions allocated their results in the aggregate state context,
resulting in a query-lifespan memory leak. It's probably not possible for
this to amount to anything much at present, since the number of leaked
results would only be the number of worker processes. But it might become
a problem in future. To fix, don't use the same convenience subroutine for
setting up results that the aggregate transition functions use.
David Rowley
Report: <10050.1466637736@sss.pgh.pa.us>
| |
The original specification for this called for the deserialization function
to have signature "deserialize(serialtype) returns transtype", which is a
security violation if transtype is INTERNAL (which it always would be in
practice) and serialtype is not (which ditto). The patch blithely overrode
the opr_sanity check for that, which was sloppy-enough work in itself,
but the indisputable reason this cannot be allowed to stand is that CREATE
FUNCTION will reject such a signature and thus it'd be impossible for
extensions to create parallelizable aggregates.
The minimum fix to make the signature type-safe is to add a second, dummy
argument of type INTERNAL. But to lock it down a bit more and make misuse
of INTERNAL-accepting functions less likely, let's get rid of the ability
to specify a "serialtype" for an aggregate and just say that the only
useful serialtype is BYTEA --- which, in practice, is the only interesting
value anyway, due to the usefulness of the send/recv infrastructure for
this purpose. That means we only have to allow "serialize(internal)
returns bytea" and "deserialize(bytea, internal) returns internal" as
the signatures for these support functions.
In passing fix bogus signature of int4_avg_combine, which I found thanks
to adding an opr_sanity check on combinefunc signatures.
catversion bump due to removing pg_aggregate.aggserialtype and adjusting
signatures of assorted built-in functions.
David Rowley and Tom Lane
Discussion: <27247.1466185504@sss.pgh.pa.us>
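A hedged sketch of the support-function signatures now expected for a parallel
aggregate (function names and library are hypothetical; the C implementations
behind them are assumed to exist):

    CREATE FUNCTION my_serialize(internal) RETURNS bytea
        AS 'my_extension', 'my_serialize' LANGUAGE C STRICT;
    CREATE FUNCTION my_deserialize(bytea, internal) RETURNS internal
        AS 'my_extension', 'my_deserialize' LANGUAGE C STRICT;
    -- these are then referenced from CREATE AGGREGATE via SERIALFUNC/DESERIALFUNC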
| |
The inet/cidr types sometimes failed to reject IPv6 inputs with too many
colon-separated fields, instead translating them to '::/0'. This is the
result of a thinko in the original ISC code that seems to be as yet
unreported elsewhere. Per bug #14198 from Stefan Kaltenbrunner.
Report: <20160616182222.5798.959@wrigleys.postgresql.org>
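A hedged example of the kind of input involved (exact error wording not shown):

    SELECT '1:2:3:4:5:6:7:8:9'::inet;
    -- too many colon-separated fields: now rejected as invalid input;
    -- inputs of this kind could previously be read silently as '::/0'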
| |
Transmit the leader's temp-namespace state to workers. This is important
because without it, the workers do not really have the same search path
as the leader. For example, there is no good reason (and no extant code
either) to prevent a worker from executing a temp function that the
leader created previously; but as things stood it would fail to find the
temp function, and then either fail or execute the wrong function entirely.
We still prohibit a worker from creating a temp namespace on its own.
In effect, a worker can only see the session's temp namespace if the leader
had created it before starting the worker, which seems like the right
semantics.
Also, transmit the leader's BackendId to workers, and arrange for workers
to use that when determining the physical file path of a temp relation
belonging to their session. While the original intent was to prevent such
accesses entirely, there were a number of holes in that, notably in places
like dbsize.c which assume they can safely access temp rels of other
sessions anyway. We might as well get this right, as a small down payment
on someday allowing workers to access the leader's temp tables. (With
this change, directly using "MyBackendId" as a relation or buffer backend
ID is deprecated; you should use BackendIdForTempRelations() instead.
I left a couple of such uses alone though, as they're not going to be
reachable in parallel workers until we do something about localbuf.c.)
Move the thou-shalt-not-access-thy-leader's-temp-tables prohibition down
into localbuf.c, which is where it actually matters, instead of having it
in relation_open(). This amounts to recognizing that access to temp
tables' catalog entries is perfectly safe in a worker, it's only the data
in local buffers that is problematic.
Having done all that, we can get rid of the test in has_parallel_hazard()
that says that use of a temp table's rowtype is unsafe in parallel workers.
That test was unduly expensive, and if we really did need such a
prohibition, that was not even close to being a bulletproof guard for it.
(For example, any user-defined function executed in a parallel worker
might have attempted such access.)
| |
This attempts to buy back some of whatever performance we lost from fixing
bug #14174 by inlining the initial checks in MakeExpandedObjectReadOnly()
into the callers. We can do that in a macro without creating multiple-
evaluation hazards, so it's pretty much free notationally; and the amount
of code added to callers should be minimal as well. (Testing a value can't
take many more instructions than passing it to a subroutine.)
Might as well inline DatumIsReadWriteExpandedObject() while we're at it.
This is an ABI break for callers, so it doesn't seem safe to put into 9.5,
but I see no reason not to do it in HEAD.
| |
Mostly these are just comments but there are a few in documentation
and a handful in code and tests. Hopefully this doesn't cause too much
unnecessary pain for backpatching. I relented from some of the most
common like "thru" for that reason. The rest don't seem numerous
enough to cause problems.
Thanks to Kevin Lyda's tool https://pypi.python.org/pypi/misspellings
| |
subquery_planner() failed to apply expression preprocessing to the
arbiterElems and arbiterWhere fields of an OnConflictExpr. No doubt the
theory was that this wasn't necessary because we don't actually try to
execute those expressions; but that's wrong, because it results in failure
to match to index expressions or index predicates that are changed at all
by preprocessing. Per bug #14132 from Reynold Smith.
Also add pullup_replace_vars processing for onConflictWhere. Perhaps
it's impossible to have a subquery reference there, but I'm not exactly
convinced; and even if true today it's a failure waiting to happen.
Also add some comments to other places where one or another field of
OnConflictExpr is intentionally ignored, with explanation as to why it's
okay to do so.
Also, catalog/dependency.c failed to record any dependency on the named
constraint in ON CONFLICT ON CONSTRAINT, allowing such a constraint to
be dropped while rules exist that depend on it, and allowing pg_dump to
dump such a rule before the constraint it refers to. The normal execution
path managed to error out reasonably for a dangling constraint reference,
but ruleutils.c dumped core; so in addition to fixing the omission, add
a protective check in ruleutils.c, since we can't retroactively add a
dependency in existing databases.
Back-patch to 9.5 where this code was introduced.
Report: <20160510190350.2608.48667@wrigleys.postgresql.org>
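A hedged sketch of the construct affected, matching an arbiter to an expression
index (schema invented for this sketch, not the exact case from bug #14132):

    CREATE TABLE t (name text);
    CREATE UNIQUE INDEX t_lower_name_idx ON t (lower(name));
    INSERT INTO t VALUES ('Bob')
        ON CONFLICT (lower(name)) DO NOTHING;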
| |
to_timestamp() handles the TH/th format codes by advancing over two input
characters, whatever those are. It failed to notice whether there were
two characters available to be skipped, making it possible to advance
the pointer past the end of the input string and keep on parsing.
A similar risk existed in the handling of "Y,YYY" format: it would advance
over three characters after the "," whether or not three characters were
available.
In principle this might be exploitable to disclose contents of server
memory. But the security team concluded that it would be very hard to use
that way, because the parsing loop would stop upon hitting any zero byte,
and TH/th format codes can't be consecutive --- they have to follow some
other format code, which would have to match whatever data is there.
So it seems impractical to examine memory very much beyond the end of the
input string via this bug; and the input string will always be in local
memory not in disk buffers, making it unlikely that anything very
interesting is close to it in a predictable way. So this doesn't quite
rise to the level of needing a CVE.
Thanks to Wolf Roediger for reporting this bug.
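A hedged example of an input that exercises the TH code at the end of the string
(the point is only that parsing now stops at the string's end rather than reading
past it; the returned value is not shown):

    SELECT to_timestamp('12', 'DDTH');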
| |
The similarity of the original names to SQL keywords seems like a bad
idea. Rename them before we're stuck with 'em forever.
In passing, minor code and docs cleanup.
Discussion: <4875.1462210058@sss.pgh.pa.us>
| |
Commit 7d9a4737c268f61fb8800957631f12d3f13be218 greatly improved the
accuracy of the numeric transcendental functions, however it failed to
consider the case where the result from pow() is close to the overflow
threshold, for example 0.12 ^ -2345.6. For such inputs, where the
result has more than 2000 digits before the decimal point, the decimal
result weight estimate was being clamped to 2000, leading to a loss of
precision in the final calculation.
Fix this by replacing the clamping code with an overflow test that
aborts the calculation early if the final result is sure to overflow,
based on the overflow limit in exp_var(). This provides the same
protection against integer overflow in the subsequent result scale
computation as the original clamping code, but it also ensures that
precision is never lost and saves compute cycles in cases that are
sure to overflow.
The new early overflow test works with the initial low-precision
result (expected to be accurate to around 8 significant digits) and
includes a small fuzz factor to ensure that it doesn't kick in for
values that would not overflow exp_var(), so the overall overflow
threshold of pow() is unchanged and consistent for all inputs with
non-integer exponents.
Author: Dean Rasheed
Reviewed-by: Tom Lane
Discussion: http://www.postgresql.org/message-id/CAEZATCUj3U-cQj0jjoia=qgs0SjE3auroxh8swvNKvZWUqegrg@mail.gmail.com
See-also: http://www.postgresql.org/message-id/CAEZATCV7w+8iB=07dJ8Q0zihXQT1semcQuTeK+4_rogC_zq5Hw@mail.gmail.com
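The example from the description, as a hedged query sketch:

    SELECT 0.12 ^ (-2345.6);
    -- result has more than 2000 digits before the decimal point and is now
    -- computed at full precision instead of losing digits to the old clamp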
| |
The variable storing a lexeme's position had the wrong type, char, which is
obviously not wide enough to hold 2^14 possible positions.
Stas Kelvich
| |
These changes adjust code and comments in minor ways to prevent
pgindent from mangling them. Among other things, I tried to avoid
situations where pgindent would emit "a +b" instead of "a + b", and I
tried to avoid having it break up inline comments across multiple
lines.
| |
The way that PartialAggregate and FinalizeAggregate plan nodes were
displaying output columns before was bogus. Now, FinalizeAggregate
produces the same outputs as an Aggregate would have produced, while
PartialAggregate produces each of those outputs prefixed by the word
PARTIAL.
Discussion: 12585.1460737650@sss.pgh.pa.us
Patch by me, reviewed by David Rowley.
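An indicative sketch of the new display (relation name hypothetical; plan shape,
worker details, and exact output lines depend on costs and settings):

    EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM big_table;
    --  Finalize Aggregate
    --    Output: count(*)
    --    ->  Gather
    --          ->  Partial Aggregate
    --                Output: PARTIAL count(*)
    --                ->  Parallel Seq Scan on big_table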
| |
Ashutosh Sharma
| |
The true explanation for Peter Eisentraut's report of inexact asind results
seems to be that (a) he's compiling into x87 instruction set, which uses
wider-than-double float registers, plus (b) the library function asin() on
his platform returns a result that is wider than double and is not rounded
to double width. To fix, we have to force the function's result to be
rounded comparably to what happened to the scaling constant asin_0_5.
Experimentation suggests that storing it into a volatile local variable is
the least ugly way of making that happen. Although only asin() is known to
exhibit an observable inexact result, we'd better do this in all the places
where we're hoping to get an exact result by scaling.
| |
Commit 65abaab547a5758b tried to prevent the scaling constants used in
the degree-based trig functions from being precomputed at compile time,
because some compilers do that with functions that don't yield results
identical-to-the-last-bit to what you get at runtime. A report from
Peter Eisentraut suggests that some recent compilers are smart enough
to see through that trick, though. Instead, let's put the inputs to
these calculations into non-const global variables, which should be a
more reliable way of convincing the compiler that it can't assume that
they are compile-time constants. (If we really get desperate, we could
mark these variables "volatile", but I do not believe we should have to.)
| |
NetBSD has seen fit to invent a libc function named strtoi(), which
conflicts with the long-established static functions of the same name in
datetime.c and ecpg's interval.c. While muttering darkly about intrusions
on application namespace, we'll rename our functions to avoid the conflict.
Back-patch to all supported branches, since this would affect attempts
to build any of them on recent NetBSD.
Thomas Munro
| |
Was part of box type in SP-GiST index patch.
Reported-by: Emre Hasegeli
| |
When we shoehorned "x op ANY (array)" into the SQL syntax, we created a
fundamental ambiguity as to the proper treatment of a sub-SELECT on the
righthand side: perhaps what's meant is to compare x against each row of
the sub-SELECT's result, or perhaps the sub-SELECT is meant as a scalar
sub-SELECT that delivers a single array value whose members should be
compared against x. The grammar resolves it as the former case whenever
the RHS is a select_with_parens, making the latter case hard to reach ---
but you can get at it, with tricks such as attaching a no-op cast to the
sub-SELECT. Parse analysis would throw away the no-op cast, leaving a
parsetree with an EXPR_SUBLINK SubLink directly under a ScalarArrayOpExpr.
ruleutils.c was not clued in on this fine point, and would naively emit
"x op ANY ((SELECT ...))", which would be parsed as the first alternative,
typically leading to errors like "operator does not exist: text = text[]"
during dump/reload of a view or rule containing such a construct. To fix,
emit a no-op cast when dumping such a parsetree. This might well be
exactly what the user wrote to get the construct accepted in the first
place; and even if she got there with some other dodge, it is a valid
representation of the parsetree.
Per report from Karl Czajkowski. He mentioned only a case involving
RLS policies, but actually the problem is very old, so back-patch to
all supported branches.
Report: <20160421001832.GB7976@moraine.isi.edu>
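A hedged sketch of the construct in question (schema invented for this example):

    CREATE TABLE tt (x text);
    CREATE VIEW vv AS
        SELECT * FROM tt
        WHERE x = ANY ((SELECT ARRAY['a','b'])::text[]);
    -- the no-op cast makes the sub-SELECT a scalar delivering an array, and
    -- ruleutils.c now emits such a cast when deparsing the view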
| |
Also, avoid reading PGPROC's wait_event field twice, once for the wait
event and again for the wait_event_type, because the value might change
in the middle.
Petr Jelinek and Robert Haas
| |
Makes no difference, but it's cleaner this way.
Michael Paquier
| |
Coverity complained about an apparent missing "break" in a switch
added by bb140506df605fab. The human-readable comments are pretty
clear that this is intentional, but add a standard /* FALL THRU */
comment to make it clear to tools too.
| |
This creates an initial set of default roles which administrators may
use to grant access to, historically, superuser-only functions. Using
these roles instead of granting superuser access reduces the number of
superuser roles required for a system. Documentation for each of the
default roles has been added to user-manag.sgml.
Bump catversion to 201604082, as we had a commit that bumped it to
201604081 and another that set it back to 201604071...
Reviews by José Luis Tallón and Robert Haas
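A hedged usage sketch (pg_signal_backend is one such default role; the grantee
role is hypothetical):

    GRANT pg_signal_backend TO app_admin;
    -- app_admin can now signal other backends without being a superuser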
| |
This will prevent users from creating roles which begin with "pg_" and
will check for those roles before allowing an upgrade using pg_upgrade.
This will allow for default roles to be provided at initdb time.
Reviews by José Luis Tallón and Robert Haas
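A hedged illustration (exact error wording not shown):

    CREATE ROLE pg_app_admin;   -- now rejected: role names beginning with "pg_" are reserved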
| |
It's not ready yet; revert two commits:
690c543550b0d2852060c18d270cdb534d339d9a - unstable test output
386e3d7609c49505e079c40c65919d99feb82505 - the patch itself
| |
This allows parallel aggregation to use them. It may seem surprising
that we use float8_combine for both float4_accum and float8_accum
transition functions, but that's because those functions differ only
in the type of the non-transition-state argument.
Haribabu Kommi, reviewed by David Rowley and Tomas Vondra
| |
As noticed by Tom Lane, changing the operations' numbering in commit
bb140506df605fab58f48926ee1db1f80bdafb59 causes an on-disk format incompatibility.
Revert to the previous numbering, which is the reason to add a special array that
stores the operations' priorities. This also reverts the sort order of tsquery to
the previous one.
Author: Dmitry Ivanov
| |
Indexes (only B-tree for now) can now contain "extra" column(s) that don't
participate in the index structure; they are just stored in leaf tuples. This
allows an index-only scan to be satisfied by a single index instead of two or
more indexes.
Author: Anastasia Lubennikova with minor editorializing by me
Reviewers: David Rowley, Peter Geoghegan, Jeff Janes
| |
Rename comparePos() to compareWordEntryPos() to avoid exporting too generic a
name.
Per gripe from Tom Lane.
| |
Per investigation of a failure on buildfarm member skink, with the help of
RANDOMIZE_ALLOCATED_MEMORY.
| |
This patch introduces a new text search operator (<-> or <DISTANCE>) into tsquery.
The on-disk and binary in/out formats of tsquery remain backward compatible.
It has two side effects:
- the sort order of tsquery changes, so users who have a btree index over tsquery
  should reindex it
- tsquery output contains fewer parentheses and becomes more readable
Authors: Teodor Sigaev, Oleg Bartunov, Dmitry Ivanov
Reviewers: Alexander Korotkov, Artur Zakirov
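A hedged usage sketch (sample text chosen for this example; results in the
comments are the expected ones):

    SELECT to_tsvector('the quick brown fox') @@ 'quick <-> brown'::tsquery;  -- true: adjacent
    SELECT to_tsvector('the quick brown fox') @@ 'quick <2> fox'::tsquery;    -- true: two positions apart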
| |
Now that pg_dump will properly dump out any ACL changes made to
functions which exist in pg_catalog, switch to using the GRANT system
to manage access to those functions.
This means removing 'if (!superuser()) ereport()' checks from the
functions themselves and then REVOKEing EXECUTE right from 'public' for
these functions in system_views.sql.
Reviews by Alexander Korotkov, Jose Luis Tallon
| |
Now that all of the infrastructure exists, add in the ability to
dump out the ACLs of the objects inside of pg_catalog or the ACLs
for objects which are members of extensions, but only if they have
been changed from their original values.
The original values are tracked in pg_init_privs. When pg_dump'ing
9.6-and-above databases, we will dump out the ACLs for all objects
in pg_catalog and the ACLs for all extension members, where the ACL
has been changed from the original value which was set during either
initdb or CREATE EXTENSION.
This should not change dumps against pre-9.6 databases.
Reviews by Alexander Korotkov, Jose Luis Tallon
| |
It inserts a new value into a jsonb array at an arbitrary position, or adds a
new key to a jsonb object.
Author: Dmitry Dolgov
Reviewers: Petr Jelinek, Vitaly Burovoy, Andrew Dunstan
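A hedged usage sketch (results shown in the comments are the expected ones):

    SELECT jsonb_insert('[1, 2, 3]', '{1}', '"new"');   -- [1, "new", 2, 3]
    SELECT jsonb_insert('{"a": 1}', '{b}', '2');        -- {"a": 1, "b": 2}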
| |
This lets us use parallel aggregate for a variety of useful cases
that didn't work before, like sum(int8), sum(numeric), several
versions of avg(), and various other functions.
Add some regression tests, as well, testing the general sanity of
these and future catalog entries.
David Rowley, reviewed by Tomas Vondra, with a few further changes
by me.
| |
When adjusting the estimate for the number of distinct values from a
rel in a grouped query to take into account the selectivity of the
rel's restrictions, use a formula that is less likely to produce
under-estimates.
The old formula simply multiplied the number of distinct values in the
rel by the restriction selectivity, which would be correct if the
restrictions were fully correlated with the grouping expressions, but
can produce significant under-estimates in cases where they are not
well correlated.
The new formula is based on the random selection probability, and so
assumes that the restrictions are not correlated with the grouping
expressions. This is guaranteed to produce larger estimates, and of
course risks over-estimating in cases where the restrictions are
correlated, but that has less severe consequences than
under-estimating, which might lead to a HashAgg that consumes an
excessive amount of memory.
This could possibly be improved upon in the future by identifying
correlated restrictions and using a hybrid of the old and new
formulae.
Author: Tomas Vondra, with some hacking by me
Reviewed-by: Mark Dilger, Alexander Korotkov, Dean Rasheed and Tom Lane
Discussion: http://www.postgresql.org/message-id/flat/56CD0381.5060502@2ndquadrant.com
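A hedged sketch of the idea in formula terms (notation mine): with N tuples in
the rel, d distinct values, and n rows surviving the restrictions, the old
estimate was roughly d * (n / N), while the new one is along the lines of
d * (1 - ((N - n) / N) ^ (N / d)), i.e. the expected number of distinct values
hit when picking n rows at random, which is never smaller than the old estimate.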
| |
Formerly, the geometric I/O routines such as box_in and point_out relied
directly on strtod() and sprintf() for conversion of the float8 component
values of their data types. However, the behavior of those functions is
pretty platform-dependent, especially for edge-case values such as
infinities and NaNs. This was exposed by commit acdf2a8b372aec1d, which
added test cases involving boxes with infinity endpoints, and immediately
failed on Windows and AIX buildfarm members. We solved these problems
years ago in the main float8in and float8out functions, so let's fix it
by making the geometric types use that code instead of depending directly
on the platform-supplied functions.
To do this, refactor the float8in code so that it can be used to parse
just part of a string, and as a convenience make the guts of float8out
usable without going through DirectFunctionCall.
While at it, get rid of geo_ops.c's fairly shaky assumptions about the
maximum output string length for a double, by having it build results in
StringInfo buffers instead of fixed-length strings.
In passing, convert all the "invalid input syntax for type foo" messages
in this area of the code into "invalid input syntax for type %s" to reduce
the number of distinct translatable strings, per recent discussion.
We would have needed a fair number of the latter anyway for code-sharing
reasons, so we might as well just go whole hog.
Note: this patch is by no means intended to guarantee that the geometric
types uniformly behave sanely for infinity or NaN component values.
But any bugs we have in that line were there all along, they were just
harder to reach in a platform-independent way.
| |
My compiler doesn't like the lack of initialization of "flag", and
I think it's right: if there were zero keys we'd have an undefined
result. The AND of zero items is TRUE, so initialize to TRUE.
| |
This patch implements a quad-tree over boxes. The naive approach of a 2D quad
tree does not work for non-point objects, because splitting the space at a node
is not efficient. The idea of the patch is to treat 2D boxes as 4D points, so
that objects do not overlap (in 4D space).
Performance tests show that this technique is especially beneficial for heavily
overlapping objects, so-called "spaghetti data".
Author: Alexander Lebedev, with editorialization by Emre Hasegeli and me
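A hedged usage sketch (table and query box invented for this example):

    CREATE TABLE my_boxes (b box);
    CREATE INDEX my_boxes_spgist_idx ON my_boxes USING spgist (b);
    SELECT * FROM my_boxes WHERE b && box '((0,0),(10,10))';   -- can use the SP-GiST index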
| |
Author: Alexander Lebedev
| |
interval_mul() attempts to prevent its calculations from producing silly
results, but it forgot that zero times infinity yields NaN in IEEE
arithmetic. Hence, a case like '1 second'::interval * 'infinity'::float8
produced a NaN for the months product, which didn't trigger the range
check, resulting in bogus and possibly platform-dependent output.
This isn't terribly obvious to the naked eye because if you try that
exact case, you get "interval out of range" which is what you expect
--- but if you look closer, the error is coming from interval_out not
interval_mul. interval_mul has allowed a bogus value into the system.
Fix by adding isnan tests.
Noted while testing Vitaly Burovoy's fix for infinity input to
to_timestamp(). Given the lack of field complaints, I doubt this
is worth a back-patch.
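The case from the description, as a hedged sketch (error wording indicative):

    SELECT '1 second'::interval * 'infinity'::float8;
    -- ERROR:  interval out of range
    -- (the error now comes from interval_mul itself, not later from interval_out)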
|