| |
INSERT .. ON CONFLICT runs a pre-check of the possibly conflicting
constraints before performing the actual speculative insertion. If the
inserted tuple included TOASTed columns, the ON CONFLICT condition was
handled correctly when the conflict was caught by the pre-check; but if
two transactions entered the speculative insertion phase at the same
time, one of them had to retry, and the code for aborting a speculative
insertion did not handle deleting the speculatively inserted TOAST
datums correctly.
TOAST deletion would fail with "ERROR: attempted to delete invisible
tuple" because we attempted to remove the TOAST tuples using
simple_heap_delete, which reasoned that the given tuples should not be
visible to the command that wrote them.
This commit updates heap_abort_speculative(), which aborts the
conflicting tuple, to delete the associated TOAST datums itself via
toast_delete. As before, the inserted TOAST rows are not marked as
being speculative.
This commit also adds an isolationtester spec test exercising the
relevant code path. Unfortunately 9.5 cannot handle two waiting
sessions, and thus cannot execute this test.
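For illustration (example mine, not the committed spec test), the failing
scenario needs two sessions concurrently upserting a value large and
incompressible enough to be stored out of line:
  CREATE TABLE upsert_toast (key int PRIMARY KEY, payload text);
  -- run this simultaneously in two sessions
  INSERT INTO upsert_toast
    SELECT 1, string_agg(md5(g::text), '') FROM generate_series(1, 100000) g
    ON CONFLICT (key) DO NOTHING;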
Reported-By: Viren Negi, Oskari Saarenmaa
Author: Oskari Saarenmaa, edited a bit by me
Bug: #14150
Discussion: <20160519123338.12513.20271@wrigleys.postgresql.org>
Backpatch: 9.5, where ON CONFLICT was introduced
| |
The performance overhead of this can be significant on Windows, and most
people don't have the tools to view it anyway as Windows does not have
native support for process titles.
Discussion: <0A3221C70F24FB45833433255569204D1F5BE3E8@G01JPEXMBYT05>
Takayuki Tsunakawa
| |
I'm not sure which bozo thought it's a problem to use strtol() only
for its endptr result, but silence the warning using the same method
used elsewhere.
Report: <f845d3a6-5328-3e2a-924f-f8e91aa2b6d2@2ndquadrant.com>
| |
From: Alexander Law <exclusion@gmail.com>
| |
Per discussion, set the default value of max_parallel_workers_per_gather
to 0 in 9.6 only. We'll leave it enabled in master so that it gets
more testing and in the hope that it can be enabled by default in v10.
| |
NUMERIC_MAX_PRECISION is a purely arbitrary constraint on the precision
and scale you can write in a numeric typmod. It might once have had
something to do with the allowed range of a typmod-less numeric value,
but at least since 9.1 we've allowed, and documented that we allowed,
any value that would physically fit in the numeric storage format;
which is something over 100000 decimal digits, not 1000.
Hence, get rid of numeric_in()'s use of NUMERIC_MAX_PRECISION as a limit
on the allowed range of the exponent in scientific-format input. That was
especially silly in view of the fact that you can enter larger numbers as
long as you don't use 'e' to do it. Just constrain the value enough to
avoid localized overflow, and let make_result be the final arbiter of what
is too large. Likewise adjust ecpg's equivalent of this code.
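For illustration (example mine), scientific-format input like the following
is now accepted, just as the same value written out in plain digits already
was:
  SELECT '1e100000'::numeric;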
Also get rid of numeric_recv()'s use of NUMERIC_MAX_PRECISION to limit the
number of base-NBASE digits it would accept. That created a dump/restore
hazard for binary COPY without doing anything useful; the wire-format
limit on number of digits (65535) is about as tight as we would want.
In HEAD, also get rid of pg_size_bytes()'s unnecessary intimacy with what
the numeric range limit is. That code doesn't exist in the back branches.
Per gripe from Aravind Kumar. Back-patch to all supported branches,
since they all contain the documentation claim about allowed range of
NUMERIC (cf commit cabf5d84b).
Discussion: <2895.1471195721@sss.pgh.pa.us>
| |
Per discussion, we should provide such functions to replace the lost
ability to discover AM properties by inspecting pg_am (cf commit
65c5fcd35). The added functionality is also meant to displace any code
that was looking directly at pg_index.indoption, since we'd rather not
believe that the bit meanings in that field are part of any client API
contract.
As future-proofing, define the SQL API to not assume that properties that
are currently AM-wide or index-wide will remain so unless they logically
must be; instead, expose them only when inquiring about a specific index
or even specific index column. Also provide the ability for an index
AM to override the behavior.
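For illustration (function and property names as I recall them from the
patch, not spelled out above), inquiries are made per index or per column:
  SELECT pg_index_column_has_property('pg_class_oid_index'::regclass, 1, 'orderable');
  SELECT pg_indexam_has_property((SELECT oid FROM pg_am WHERE amname = 'gin'), 'can_order');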
In passing, document pg_am.amtype, overlooked in commit 473b93287.
Andrew Gierth, with kibitzing by me and others
Discussion: <87mvl5on7n.fsf@news-spur.riddles.org.uk>
| |
Several places in NUM_numpart_from_char(), which is called from the SQL
function to_number(text, text), could accidentally read one byte past
the end of the input buffer (which comes from the input text datum and
is not null-terminated).
1. One leading space character would be skipped, but there was no check
that the input was at least one byte long. This does not happen in
practice, but for defensiveness, add a check anyway.
2. Commit 4a3a1e2cf apparently accidentally duplicated the code that skips
one space character (so that two spaces might be skipped), but there
was no overflow check before skipping the second byte. Fix by
removing the duplicate code.
3. A logic error would allow a one-byte over-read when looking for a
trailing sign (S) placeholder.
In each case, the extra byte cannot be read out directly, but looking at
it might cause a crash.
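For illustration only (shapes of calls that reach the affected logic, not
guaranteed crash reproducers):
  SELECT to_number(' 123', '999');   -- leading-space skipping (items 1 and 2)
  SELECT to_number('123-', '999S');  -- trailing sign placeholder (item 3)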
The third item was discovered by Piotr Stefaniak, the first two were
found and analyzed by Tom Lane and Peter Eisentraut.
| |
07d25a964 changed only one occurrence. Change the other one as well.
| |
If ANALYZE found no repeated non-null entries in its sample, it set the
column's stadistinct value to -1.0, intending to indicate that the entries
are all distinct. But what this value actually means is that the number
of distinct values is 100% of the table's rowcount, and thus it was
overestimating the number of distinct values by however many nulls there
are. This could lead to very poor selectivity estimates, as for example
in a recent report from Andreas Joseph Krogh. We should discount the
stadistinct value by whatever we've estimated the nulls fraction to be.
(That is what will happen if we choose to use a negative stadistinct for
a column that does have repeated entries, so this code path was just
inconsistent.)
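For illustration (numbers invented): if ANALYZE finds a column to be 25%
nulls with no repeated non-null values, it now stores stadistinct = -0.75
rather than -1.0, so the distinct-values estimate becomes 0.75 * reltuples
instead of the full rowcount.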
In addition to fixing the stadistinct entries stored by several different
ANALYZE code paths, adjust the logic where get_variable_numdistinct()
forces an "all distinct" estimate on the basis of finding a relevant unique
index. Unique indexes don't reject nulls, so there's no reason to assume
that the null fraction doesn't apply.
Back-patch to all supported branches. Back-patching is a bit of a judgment
call, but this problem seems to affect only a few users (else we'd have
identified it long ago), and it's bad enough when it does happen that
destabilizing plan choices in a worse direction seems unlikely.
Patch by me, with documentation wording suggested by Dean Rasheed
Report: <VisenaEmail.26.df42f82acae38a58.156463942b8@tc7-visena>
Discussion: <16143.1470350371@sss.pgh.pa.us>
| |
Oversight in commit 976b24fb4.
Andreas Seltenreich
Report: <87y448l3ag.fsf@credativ.de>
| |
Discussion of commit 3e2f3c2e4 exposed a problem that is of longer
standing: since we don't detoast data while sticking it into a portal's
holdStore for PORTAL_ONE_RETURNING and PORTAL_UTIL_SELECT queries, and we
release the query's snapshot as soon as we're done loading the holdStore,
later readout of the holdStore can do TOAST fetches against data that can
no longer be seen by any of the session's live snapshots. This means that
a concurrent VACUUM could remove the TOAST data before we can fetch it.
Commit 3e2f3c2e4 exposed the problem by showing that sometimes we had *no*
live snapshots while fetching TOAST data, but we'd be at risk anyway.
I believe this code was all right when written, because our management of a
session's exposed xmin was such that the TOAST references were safe until
end of transaction. But that's no longer true now that we can advance or
clear our PGXACT.xmin intra-transaction.
To fix, copy the query's snapshot during FillPortalStore() and save it in
the Portal; release it only when the portal is dropped. This essentially
implements a policy that we must hold a relevant snapshot whenever we
access potentially-toasted data. We had already come to that conclusion
in other places, cf commits 08e261cbc94ce9a7 and ec543db77b6b72f2.
I'd have liked to add a regression test case for this, but I didn't see
a way to make one that's not unreasonably bloated; it seems to require
returning a toasted value to the client, and those will be big.
In passing, improve PortalRunUtility() so that it positively verifies
that its ending PopActiveSnapshot() call will pop the expected snapshot,
removing a rather shaky assumption about which utility commands might
do their own PopActiveSnapshot(). There's no known bug here, but now
that we're actively referencing the snapshot it's almost free to make
this code a bit more bulletproof.
We might want to consider back-patching something like this into older
branches, but it would be prudent to let it prove itself more in HEAD
beforehand.
Discussion: <87vazemeda.fsf@credativ.de>
| |
The sole caller expects NULL to be returned in such a case, so make
it so and document it.
Per reports from Andreas Seltenreich and Regina Obe. This doesn't
really fix their problem, as now their RETURNING queries will say
"ERROR: no known snapshots", but in any case this function should
not dump core in a reasonably-foreseeable situation.
Report: <87vazemeda.fsf@credativ.de>
Report: <20160807051854.1427.32414@wrigleys.postgresql.org>
| |
This is required for the result to be a legal tsvector value.
Noted while fooling with Andreas Seltenreich's ts_delete() crash.
Discussion: <87invhoj6e.fsf@credativ.de>
| |
Such cases either failed an Assert, or produced a corrupt tsvector in
non-Assert builds, as reported by Andreas Seltenreich. The reason is
that tsvector_delete_by_indices() just assumed that its input array had
no duplicates. Fix by explicitly de-duping.
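For illustration (example mine, not the original fuzzer case), a duplicate
entry in the lexeme array is now handled:
  SELECT ts_delete('fat:2,4 cat:3 rat:5'::tsvector, ARRAY['fat', 'fat']);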
In passing, improve some comments, and fix a number of tests for null
values to use ERRCODE_NULL_VALUE_NOT_ALLOWED not
ERRCODE_INVALID_PARAMETER_VALUE.
Discussion: <87invhoj6e.fsf@credativ.de>
| |
Messed up by recent commits --- this is annoying me while trying to fix
some bugs here.
| |
Previously, we tested for MVCC snapshots to see whether they were too
old, but not TOAST snapshots, which can lead to complaints about missing
TOAST chunks if those chunks are subject to early pruning. Ideally,
the threshold lsn and timestamp for a TOAST snapshot would be that of
the corresponding MVCC snapshot, but since we have no way of deciding
which MVCC snapshot was used to fetch the TOAST pointer, use the oldest
active or registered snapshot instead.
Reported by Andres Freund, who also sketched out what the fix should
look like. Patch by me, reviewed by Amit Kapila.
| |
Previously, if an INSERT with multiple rows of VALUES had indirection
(array subscripting or field selection) in its target-columns list, the
parser handled that by applying transformAssignedExpr() to each element
of each VALUES row independently. This led to having ArrayRef assignment
nodes or FieldStore nodes in each row of the VALUES RTE. That works for
simple cases, but in bug #14265 Nuri Boardman points out that it fails
if there are multiple assignments to elements/fields of the same target
column. For such cases to work, rewriteTargetListIU() has to nest the
ArrayRefs or FieldStores together to produce a single expression to be
assigned to the column. But it failed to find them in the top-level
targetlist and issued an error about "multiple assignments to same column".
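For illustration (example mine, along the lines of the report), a statement
of this shape used to draw that error and now works:
  CREATE TABLE arr_tab (id int, col int[]);
  INSERT INTO arr_tab (id, col[1], col[2]) VALUES (1, 10, 20), (2, 30, 40);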
We could possibly fix this by teaching the rewriter to apply
rewriteTargetListIU to each VALUES row separately, but that would be messy
(it would change the output rowtype of the VALUES RTE, for example) and
inefficient. Instead, let's fix the parser so that the VALUES RTE outputs
are just the user-specified values, cast to the right type if necessary,
and then the ArrayRefs or FieldStores are applied in the top-level
targetlist to Vars representing the RTE's outputs. This is the same
parsetree representation already used for similar cases with INSERT/SELECT
syntax, so it allows simplifications in ruleutils.c, which no longer needs
to treat INSERT-from-multiple-VALUES as its own special case.
This implementation works by applying transformAssignedExpr to the VALUES
entries as before, and then stripping off any ArrayRefs or FieldStores it
adds. With lots of VALUES rows it would be noticeably more efficient to
not add those nodes in the first place. But that's just an optimization
not a bug fix, and there doesn't seem to be any good way to do it without
significant refactoring. (A non-invasive answer would be to apply
transformAssignedExpr + stripping to just the first VALUES row, and then
just forcibly cast remaining rows to the same data types exposed in the
first row. But this way would lead to different, not-INSERT-specific
errors being reported in casting failure cases, so it doesn't seem very
nice.) So leave that for later; this patch at least isn't making the
per-row parsing work worse, and it does make the finished parsetree
smaller, saving rewriter and planner work.
Catversion bump because stored rules containing such INSERTs would need
to change. Because of that, no back-patch, even though this is a very
long-standing bug.
Report: <20160727005725.7438.26021@wrigleys.postgresql.org>
Discussion: <9578.1469645245@sss.pgh.pa.us>
| |
We don't want postmaster child processes to contain a copy of the
postmaster's PostmasterContext. That would be a waste of memory at least,
and at worst a security issue, since there are copies of the semi-sensitive
pg_hba and pg_ident data in there. All other child process types delete
the PostmasterContext after forking, but the original coding of the
background worker patch (commit da07a1e85) did not do so. It appears
that the only reason for that was to avoid copying the bgworker's
MyBgworkerEntry out of that context; but the couple of additional
statements needed to do so are hardly good justification for it. Hence,
copy that data and then clear the context as other child processes do.
Because this patch changes the memory context in which a bgworker function
gains control, back-patching it would be a bit risky, so we won't fix this
in back branches. The "security" complaint is pretty thin anyway for
generic bgworkers; only with the introduction of parallel query is there
any question of running untrusted code in a bgworker process.
Discussion: <14111.1470082717@sss.pgh.pa.us>
| |
Author: Amit Langote
| |
Setting it to 0 is probably not useful in practice, but it allows
testing of situations without available background worker slots.
| |
Michael Paquier
| |
Commits 4452000f3 et al established semantics for NullTest.argisrow that
are a bit different from its initial conception: rather than being merely
a cache of whether we've determined the input to have composite type,
the flag now has the further meaning that we should apply field-by-field
testing as per the standard's definition of IS [NOT] NULL. If argisrow
is false and yet the input has composite type, the construct instead has
the semantics of IS [NOT] DISTINCT FROM NULL. Update the comments in
primnodes.h to clarify this, and fix ruleutils.c and deparse.c to print
such cases correctly. In the case of ruleutils.c, this merely results in
cosmetic changes in EXPLAIN output, since the case can't currently arise
in stored rules. However, it represents a live bug for deparse.c, which
would formerly have sent a remote query that had semantics different
from the local behavior. (From the user's standpoint, this means that
testing a remote nested-composite column for null-ness could have had
unexpected recursive behavior much like that fixed in 4452000f3.)
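For illustration of the two semantics (examples mine), the argisrow = true
form tests the fields individually, per the SQL standard:
  SELECT ROW(1, NULL) IS NULL;      -- false: not every field is null
  SELECT ROW(1, NULL) IS NOT NULL;  -- also false: not every field is non-null
  SELECT ROW(NULL, NULL) IS NULL;   -- true
whereas the argisrow = false form is a simple test of whether the composite
datum itself is null.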
In a related but somewhat independent fix, make plancat.c set argisrow
to false in all NullTest expressions constructed to represent "attnotnull"
constructs. Since attnotnull is actually enforced as a simple null-value
check, this is a more accurate representation of the semantics; we were
previously overpromising what it meant for composite columns, which might
possibly lead to incorrect planner optimizations. (It seems that what the
SQL spec expects a NOT NULL constraint to mean is an IS NOT NULL test, so
arguably we are violating the spec and should fix attnotnull to do the
other thing. If we ever do, this part should get reverted.)
Back-patch, same as the previous commit.
Discussion: <10682.1469566308@sss.pgh.pa.us>
| |
Previously, some functions returned various fixed strings and others
failed with a cache lookup error. Per discussion, standardize on
returning NULL. Although user-exposed "cache lookup failed" error
messages might normally qualify for bug-fix treatment, no back-patch;
the risk of breaking user code which is accustomed to the current
behavior seems too high.
Michael Paquier
| |
runtime.sgml used to contain a table of estimated shared memory consumption
rates for max_connections and some other GUCs. Commit 390bfc643 removed
that on the well-founded grounds that (a) we weren't maintaining the
entries well and (b) it no longer mattered so much once we got out from
under SysV shmem limits. But it missed that there were even-more-obsolete
versions of some of those numbers in comments in postgresql.conf.sample.
Remove those too. Back-patch to 9.3 where the aforesaid commit went in.
| |
Peter Geoghegan
| |
The Assert() here seems unreasonably optimistic. Andreas Seltenreich
found that it could fail with NaNs in the input geometries, and it
seems likely to me that it might fail in corner cases due to roundoff
error, even for ordinary input values. As a band-aid, make the function
return SQL NULL instead of crashing.
Report: <87d1md1xji.fsf@credativ.de>
| |
We must not push down a foreign join when the foreign tables involved
should be accessed under different user mappings. Previously we tried
to enforce that rule literally during planning, but that meant that the
resulting plans were dependent on the current contents of the
pg_user_mapping catalog, and we had to blow away all cached plans
containing any remote join when anything at all changed in pg_user_mapping.
This could have been improved somewhat, but the fact that a syscache inval
callback has very limited info about what changed made it hard to do better
within that design. Instead, let's change the planner to not consider user
mappings per se, but to allow a foreign join if both RTEs have the same
checkAsUser value. If they do, then they necessarily will use the same
user mapping at runtime, and we don't need to know specifically which one
that is. Post-plan-time changes in pg_user_mapping no longer require any
plan invalidation.
This rule does give up some optimization ability, to wit where two foreign
table references come from views with different owners or one's from a view
and one's directly in the query, but nonetheless the same user mapping
would have applied. We'll sacrifice the first case, but to not regress
more than we have to in the second case, allow a foreign join involving
both zero and nonzero checkAsUser values if the nonzero one is the same as
the prevailing effective userID. In that case, mark the plan as only
runnable by that userID.
The plancache code already had a notion of plans being userID-specific,
in order to support RLS. It was a little confused though, in particular
lacking clarity of thought as to whether it was the rewritten query or just
the finished plan that's dependent on the userID. Rearrange that code so
that it's clearer what depends on which, and so that the same logic applies
to both RLS-injected role dependency and foreign-join-injected role
dependency.
Note that this patch doesn't remove the other issue mentioned in the
original complaint, which is that while we'll reliably stop using a foreign
join if it's disallowed in a new context, we might fail to start using a
foreign join if it's now allowed, but we previously created a generic
cached plan that didn't use one. It was agreed that the chance of winning
that way was not high enough to justify the much larger number of plan
invalidations that would have to occur if we tried to cause it to happen.
In passing, clean up randomly-varying spelling of EXPLAIN commands in
postgres_fdw.sql, and fix a COSTS ON example that had been allowed to
leak into the committed tests.
This reverts most of commits fbe5a3fb7 and 5d4171d1c, which were the
previous attempt at ensuring we wouldn't push down foreign joins that
span permissions contexts.
Etsuro Fujita and Tom Lane
Discussion: <d49c1e5b-f059-20f4-c132-e9752ee0113e@lab.ntt.co.jp>
| |
While digging into bug #14245 I found that commit
6734a1cacd44f5b731933cbc93182b135b167d0c missed that the NOT operation is
right-associative, unlike all the others. This omission is responsible
for tsquery parser failures on a sequence of NOT operations.
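For illustration (example mine), a sequence of NOT operations such as
  SELECT '!!a'::tsquery;
exercises the now-correct right-associative parsing.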
| |
During normalization of a tsquery tree, the code tries to simplify nested
NOT operations, but it obviously missed that the next node could be a leaf
node (a value node).
Bug #14245: Segfault on weird to_tsquery
Reported by David Kellum.
| |
GiST index build could go into an infinite loop when presented with boxes
(or points, circles or polygons) containing NaN component values. This
happened essentially because the code assumed that x == x is true for any
"double" value x; but it's not true for NaNs. The looping behavior was not
the only problem though: we also attempted to sort the items using simple
double comparisons. Since NaNs violate the trichotomy law, qsort could
(in principle at least) get arbitrarily confused and mess up the sorting of
ordinary values as well as NaNs. And we based splitting choices on box size
calculations that could produce NaNs, again resulting in undesirable
behavior.
To fix, replace all comparisons of doubles in this logic with
float8_cmp_internal, which is NaN-aware and is careful to sort NaNs
consistently, higher than any non-NaN. Also rearrange the box size
calculation to not produce NaNs; instead it should produce an infinity
for a box with NaN on one side and not-NaN on the other.
I don't by any means claim that this solves all problems with NaNs in
geometric values, but it should at least make GiST index insertion work
reliably with such data. It's likely that the index search side of things
still needs some work, and probably regular geometric operations too.
But with this patch we're laying down a convention for how such cases
ought to behave.
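For illustration (example mine, much smaller than the submitter's dataset),
NaN coordinates can reach the GiST build code like so:
  CREATE TABLE nan_points (p point);
  INSERT INTO nan_points VALUES ('(NaN,NaN)'), ('(1,2)'), ('(3,4)');
  CREATE INDEX ON nan_points USING gist (p);
With enough such rows to force page splits, builds like this could
previously loop forever; now they complete, sorting NaNs above all
non-NaN values.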
Per bug #14238 from Guang-Dih Lei. Back-patch to 9.2; the code used before
commit 7f3bd86843e5aad8 is quite different and doesn't lock up on my simple
test case, nor on the submitter's dataset.
Report: <20160708151747.1426.60150@wrigleys.postgresql.org>
Discussion: <28685.1468246504@sss.pgh.pa.us>
| |
Otherwise, when we abandon incremental memory accounting and use
batch allocation for the final merge pass, we might crash. This
has been broken since 0011c0091e886b874e485a46ff2c94222ffbf550.
Peter Geoghegan, tested by Noah Misch
| |
Peter Geoghegan
| |
temp_file_limit is a per-process limit, not a per-session limit across
all cooperating parallel processes; change wording accordingly, per a
suggestion from Tom Lane.
Also, document under max_parallel_workers_per_gather the fact that each
process involved in a parallel query may use as many resources as a
separate session. Caveat emptor.
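As an illustrative consequence (numbers mine): with temp_file_limit set to
1GB, a query run by a leader plus four parallel workers can consume up to
about 5GB of temporary file space in total, since the limit applies to each
process separately.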
Per a complaint from Peter Geoghegan.
| |
Author: Masahiko Sawada
| |
We were merely Assert'ing that the Var matched the RTE it's supposedly
from. But if the user passes incorrect information to pg_get_expr(),
the RTE might in fact not match; this led either to Assert failures
or core dumps, as reported by Chris Hanks in bug #14220. To fix, just
convert the Asserts to test-and-elog. Adjust an existing test-and-elog
elsewhere in the same function to be consistent in wording.
(If we really felt these were user-facing errors, we might promote them to
ereport's; but I can't convince myself that they're worth translating.)
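For illustration (a shape of misuse, not the exact case from the report),
pg_get_expr() can be handed a stored expression together with the wrong
relation, e.g.
  SELECT pg_get_expr(adbin, 'pg_class'::regclass) FROM pg_attrdef LIMIT 1;
and when the expression's Vars don't match that relation, the checks now
report an error instead of risking a crash.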
Back-patch to 9.3; the problematic code doesn't exist before that, and
a quick check says that 9.2 doesn't crash on such cases.
Michael Paquier and Thomas Munro
Report: <20160629224349.1407.32667@wrigleys.postgresql.org>
| |
If serialized_snapshot->subxcnt > 0 and serialized_snapshot->xcnt == 0,
the old coding would do the wrong thing and crash. This can happen
on standby servers.
Report by Andreas Seltenreich. Patch by Thomas Munro, reviewed by
Amit Kapila and tested by Andreas Seltenreich.
| |
The <-> operator now has higher precedence than the & (AND) operator. This
change was motivated by an unexpected difference between similar queries:
'a & b <-> c'::tsquery and 'b <-> c & a'. Previously, the first query meant
(a & b) <-> c and the second one (b <-> c) & a; now the phrase operator
binds more tightly in both.
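For illustration:
  SELECT 'a & b <-> c'::tsquery;   -- now parsed as a & (b <-> c)
  SELECT 'b <-> c & a'::tsquery;   -- still parsed as (b <-> c) & a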
Per suggestion from Tom Lane 32260.1465402409@sss.pgh.pa.us
| |
If there is no positional information for the lexemes, the phrase operator
no longer falls back to the AND operator. This change requires modifying the
TS_execute() interface, because in some places (in indexes, for example)
positional information is inaccessible, and in those cases we need to force
the fallback to AND.
Per discussion c19fcfec308e6ccd952cdde9e648b505@mail.gmail.com
| |
The phrase operator now requires the exact distance between lexemes instead
of less-or-equal.
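For illustration (examples mine, using the 'simple' configuration to avoid
stop words):
  SELECT to_tsvector('simple', 'cat dog rat') @@ 'cat <2> rat'::tsquery;  -- true: exactly 2 apart
  SELECT to_tsvector('simple', 'cat dog rat') @@ 'cat <2> dog'::tsquery;  -- now false: only 1 apart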
Per discussion c19fcfec308e6ccd952cdde9e648b505@mail.gmail.com
| |
The original coding had three separate booleans representing partial
aggregation behavior, which was confusing, unreadable, and error-prone,
not least because the booleans weren't always listed in the same order.
It was also inadequate for the allegedly-desirable future extension to
support intermediate partial aggregation, because we'd need separate
markers for serialization and deserialization in such a case.
Merge these bools into an enum "AggSplit" to provide symbolic names for
the supported operating modes (and document what those are). By assigning
the values of the enum constants carefully, we can treat AggSplit values
as options bitmasks so that tests of what to do aren't noticeably more
expensive than before.
While at it, get rid of Aggref.aggoutputtype. That's not needed since
commit 59a3795c2 got rid of setrefs.c's special-purpose Aggref comparison
code, and it likewise seemed more confusing than helpful.
Assorted comment cleanup as well (there's still more that I want to do
in that line).
catversion bump for change in Aggref node contents. Should be the last
one for partial-aggregation changes.
Discussion: <29309.1466699160@sss.pgh.pa.us>
| |
After pg_upgrade, it is possible that some tuples' Xmax have multixacts
corresponding to the old installation; such multixacts cannot have
running members anymore. In many code sites we already know not to read
them and clobber them silently, but at least when VACUUM tries to freeze
a multixact or determine whether one needs freezing, there's an attempt
to resolve it to its member transactions by calling GetMultiXactIdMembers,
and if the multixact value is "in the future" with regards to the
current valid multixact range, an error like this is raised:
ERROR: MultiXactId 123 has not been created yet -- apparent wraparound
and vacuuming fails. Per discussion with Andrew Gierth, it is completely
bogus to try to resolve multixacts coming from before a pg_upgrade,
regardless of where they stand with regards to the current valid
multixact range.
It's possible to get from under this problem by doing SELECT FOR UPDATE
of the problem tuples, but if tables are large, this is slow and
tedious, so a more thorough solution is desirable.
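For illustration of that workaround (table name hypothetical):
  BEGIN;
  -- locking each row replaces the stale pre-upgrade multixact in its xmax
  SELECT * FROM affected_table FOR UPDATE;
  COMMIT;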
To fix, we realize that multixacts in xmax created in 9.2 and previous
have a specific bit pattern that is never used in 9.3 and later (we
already knew this, per comments and infomask tests sprinkled in various
places, but we weren't leveraging this knowledge appropriately).
Whenever the infomask of the tuple matches that bit pattern, we just
ignore the multixact completely as if Xmax wasn't set; or, in the case
of tuple freezing, we act as if an unwanted value is set and clobber it
without decoding. This guarantees that no errors will be raised, and
that the values will be progressively removed until all tables are
clean. Most callers of GetMultiXactIdMembers are patched to recognize
directly that the value is a removable "empty" multixact and avoid
calling GetMultiXactIdMembers altogether.
To avoid changing the signature of GetMultiXactIdMembers() in back
branches, we keep the "allow_old" boolean flag but rename it to
"from_pgupgrade"; if the flag is true, we always return an empty set
instead of looking up the multixact. (I suppose we could remove the
argument in the master branch, but I chose not to do so in this commit).
This was broken all along, but the user-facing error first appeared because
of commit 8e9a16ab8f7f, and the problem was partially fixed in a25c2b7c4db3.
This fix, backpatched all the way back to 9.3, goes approximately in the
same direction as a25c2b7c4db3 but should cover all cases.
Bug analysis by Andrew Gierth and Álvaro Herrera.
A number of public reports match this bug:
https://www.postgresql.org/message-id/20140330040029.GY4582@tamriel.snowman.net
https://www.postgresql.org/message-id/538F3D70.6080902@publicrelay.com
https://www.postgresql.org/message-id/556439CF.7070109@pscs.co.uk
https://www.postgresql.org/message-id/SG2PR06MB0760098A111C88E31BD4D96FB3540@SG2PR06MB0760.apcprd06.prod.outlook.com
https://www.postgresql.org/message-id/20160615203829.5798.4594@wrigleys.postgresql.org
| |
A deserialize function's result is short-lived data during partial
aggregation, since we're just going to pass it to the combine function
and then it's of no use anymore. However, the built-in deserialize
functions allocated their results in the aggregate state context,
resulting in a query-lifespan memory leak. It's probably not possible for
this to amount to anything much at present, since the number of leaked
results would only be the number of worker processes. But it might become
a problem in future. To fix, don't use the same convenience subroutine for
setting up results that the aggregate transition functions use.
David Rowley
Report: <10050.1466637736@sss.pgh.pa.us>
| |
The original specification for this called for the deserialization function
to have signature "deserialize(serialtype) returns transtype", which is a
security violation if transtype is INTERNAL (which it always would be in
practice) and serialtype is not (which ditto). The patch blithely overrode
the opr_sanity check for that, which was sloppy-enough work in itself,
but the indisputable reason this cannot be allowed to stand is that CREATE
FUNCTION will reject such a signature and thus it'd be impossible for
extensions to create parallelizable aggregates.
The minimum fix to make the signature type-safe is to add a second, dummy
argument of type INTERNAL. But to lock it down a bit more and make misuse
of INTERNAL-accepting functions less likely, let's get rid of the ability
to specify a "serialtype" for an aggregate and just say that the only
useful serialtype is BYTEA --- which, in practice, is the only interesting
value anyway, due to the usefulness of the send/recv infrastructure for
this purpose. That means we only have to allow "serialize(internal)
returns bytea" and "deserialize(bytea, internal) returns internal" as
the signatures for these support functions.
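For illustration, an extension's support functions now must have signatures
of this form (names hypothetical):
  CREATE FUNCTION my_serialize(internal) RETURNS bytea
    AS 'my_extension', 'my_serialize' LANGUAGE C STRICT;
  CREATE FUNCTION my_deserialize(bytea, internal) RETURNS internal
    AS 'my_extension', 'my_deserialize' LANGUAGE C STRICT;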
In passing fix bogus signature of int4_avg_combine, which I found thanks
to adding an opr_sanity check on combinefunc signatures.
catversion bump due to removing pg_aggregate.aggserialtype and adjusting
signatures of assorted built-in functions.
David Rowley and Tom Lane
Discussion: <27247.1466185504@sss.pgh.pa.us>
| |
This patch provides a new implementation of the logic added by commit
137805f89 and later removed by 77ba61080. It differs from the original
primarily in expending much less effort per joinrel in large queries,
which it accomplishes by doing most of the matching work once per query not
once per joinrel. Hopefully, it's also less buggy and better commented.
The never-documented enable_fkey_estimates GUC remains gone.
There remains work to be done to make the selectivity estimates account
for nulls in FK referencing columns; but that was true of the original
patch as well. We may be able to address this point later in beta.
In the meantime, any error should be in the direction of overestimating
rather than underestimating joinrel sizes, which seems like the direction
we want to err in.
Tomas Vondra and Tom Lane
Discussion: <31041.1465069446@sss.pgh.pa.us>