| Commit message | Author | Age |
| |
All our current in-core relation options of type string (not many,
admittedly) behave in reality like enums. But after seeing an
implementation of enum reloptions, it's clear that strings are messier,
so introduce the new reloption type and switch all existing string
options over to enums.
Fortunately we have a recently introduced test module for reloptions, so
we don't lose coverage of string reloptions, which may still be used by
third-party modules.
Authors: Nikolay Shaplov, Álvaro Herrera
Reviewed-by: Nikita Glukhov, Aleksandr Parfenov
Discussion: https://postgr.es/m/43332102.S2V5pIjXRx@x200m
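For illustration, a minimal sketch of the kind of options involved, assuming
the string options switched over are a view's check_option and GiST's
buffering (both draw their values from a fixed list):
    CREATE TABLE pts (p point);
    CREATE INDEX pts_gist ON pts USING gist (p) WITH (buffering = auto);
    CREATE VIEW v WITH (check_option = cascaded) AS SELECT p FROM pts;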
| |
Relation options can define a lock mode other than AccessExclusiveLock
since 47167b7, but modules defining custom relation options did not
really have a way to take advantage of that. Correct that by extending
the current API set so that modules can define a custom lock mode.
Author: Michael Paquier
Reviewed-by: Kuntal Ghosh
Discussion: https://postgr.es/m/20190920013831.GD1844@paquier.xyz
| |
In-core relation options can use a custom lock mode since 47167b7,
which lowered the lock level required for some autovacuum parameters.
However, that commit forgot to consider custom relation options. This
causes failures with ALTER TABLE SET when changing a custom relation
option, as its lock mode is not defined. The existing APIs to define a
custom reloption do not allow a custom lock mode to be defined, so
enforce its initialization to AccessExclusiveLock, which should be safe
enough in all cases. An upcoming patch will extend the existing APIs to
allow a custom lock mode to be defined.
The problem can be reproduced with bloom indexes, so add a test there.
Reported-by: Nikolay Shaplov
Analyzed-by: Thomas Munro, Michael Paquier
Author: Michael Paquier
Reviewed-by: Kuntal Ghosh
Discussion: https://postgr.es/m/20190920013831.GD1844@paquier.xyz
Backpatch-through: 9.6
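A sketch of the reproducer shape, assuming the bloom extension's "length"
reloption:
    CREATE EXTENSION bloom;
    CREATE TABLE t (a int);
    CREATE INDEX t_bloom ON t USING bloom (a) WITH (length = 80);
    ALTER INDEX t_bloom SET (length = 100);  -- previously failed, lock mode undefined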
| |
Our item contains only so->numberOfNonNullOrderBys distances. Reflect
that in the loop's upper bound.
Discussion: https://postgr.es/m/53536807-784c-e029-6e92-6da802ab8d60%40postgrespro.ru
Author: Nikita Glukhov
Backpatch-through: 12
| |
The state-tracking of WAL reading in various places was pretty messy,
mostly because the ancient physical-replication WAL reading code wasn't
using the XLogReader abstraction. Make it tidier by creating two
additional supporting structs, WALSegmentContext and WALOpenSegment,
which keep track of WAL-reading state. This makes the code cleaner and
also paves the way for more future cleanup.
Author: Antonin Houska
Reviewed-by: Álvaro Herrera and (older versions) Robert Haas
Discussion: https://postgr.es/m/14984.1554998742@spoje.net
| |
Fix an oversight in commit 7266d0997: as it stood, the code failed
when a function-in-FROM returns composite and can be simplified
to a composite constant.
For the moment, just test for composite result and abandon pullup
if we see one. To make it actually work, we'd have to decompose
the composite constant into per-column constants; which is surely
do-able, but I'm not convinced it's worth the code space.
Per report from Raúl Marín Rodríguez.
Discussion: https://postgr.es/m/CAM6_UM4isP+buRA5sWodO_MUEgutms-KDfnkwGmryc5DGj9XuQ@mail.gmail.com
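A hypothetical sketch of the failing shape (an inlinable function in FROM
whose result folds to a composite constant); the names here are made up:
    CREATE TYPE two_ints AS (a int, b int);
    CREATE FUNCTION const_pair() RETURNS two_ints
        LANGUAGE sql IMMUTABLE AS $$ SELECT (1, 2)::two_ints $$;
    SELECT * FROM const_pair();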
| |
When a relation is truncated, shared_buffers needs to be scanned
so that any buffers for the relation's forks are invalidated.
Previously, shared_buffers was scanned once per relation fork, i.e.,
MAIN, FSM and VM, when VACUUM truncated off any empty pages
at the end of the relation or TRUNCATE truncated the relation in place.
Since shared_buffers needed to be scanned multiple times,
those commands could take a long time to finish, especially
when shared_buffers was large.
This commit changes the logic so that shared_buffers is scanned only
once for those three relation forks.
Author: Kirk Jamison
Reviewed-by: Masahiko Sawada, Thomas Munro, Alvaro Herrera, Takayuki Tsunakawa and Fujii Masao
Discussion: https://postgr.es/m/D09B13F772D2274BB348A310EE3027C64E2067@g01jpexmbkw24
| |
This appears to have been an oversight in b8d7f053c5c2. As it's
effectively harmless, though confusing, only fix it in master.
Author: Andres Freund
| |
If the bitstring length is not a multiple of 8, we'd shift the
rightmost bits into the pad space, which must be zeroes --- bit_cmp,
for one, depends on that. This'd lead to the result failing to
compare equal to what it should compare equal to, as reported in
bug #16013 from Daryl Waycott.
This is, if memory serves, not the first such bug in the bitstring
functions. In hopes of making it the last one, do a bit more work
than minimally necessary to fix the bug:
* Add assertion checks to bit_out() and varbit_out() to complain if
they are given incorrectly-padded input. This will improve the
odds that manual testing of any new patch finds problems.
* Encapsulate the padding-related logic in macros to make it
easier to use.
Also, remove unnecessary padding logic from bit_or() and bitxor().
Somebody had already noted that we need not re-pad the result of
bit_and() since the inputs are required to be the same length,
but failed to extrapolate that to the other two.
Also, move a comment block that once was near the head of varbit.c
(but people kept putting other stuff in front of it), to put it in
the header block.
Note for the release notes: if anyone has inconsistent data as a
result of saving the output of bitshiftright() in a table, it's
possible to fix it with something like
UPDATE mytab SET bitcol = ~(~bitcol) WHERE bitcol != ~(~bitcol);
This has been broken since day one, so back-patch to all supported
branches.
Discussion: https://postgr.es/m/16013-c2765b6996aacae9@postgresql.org
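A small illustration of the symptom, with a bit string whose length is not a
multiple of 8:
    SELECT (B'10001' >> 3) = B'00010';  -- should be true; the polluted pad
                                        -- bits made it come out false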
| |
The code used the destination slot's natts where it intended to
use the source slot's natts. Adding an Assert shows that there
is no case in "make check-world" where these counts are different,
so maybe this is a harmless bug, but it's still a bug.
Takayuki Tsunakawa
Discussion: https://postgr.es/m/0A3221C70F24FB45833433255569204D1FD34C0E@G01JPEXMBYT05
| |
Move the responsibility for advancing the NOTIFY queue tail pointer
from the listener(s) to the notification sender, and only have the
sender do it once every few queue pages, rather than after every batch
of notifications as at present. This reduces the number of times we
execute asyncQueueAdvanceTail, and reduces contention when there are
multiple listeners (since that function requires exclusive lock).
This change relies on the observation that we don't really need the tail
pointer to be exactly up-to-date. It's certainly not necessary to
attempt to release disk space more often than once per SLRU segment.
The only other usage of the tail pointer is that an incoming listener,
if it's the only listener in its database, will need to scan the queue
forward from the tail; but that's surely a less performance-critical
path than routine sending and receiving of notifies. We compromise by
advancing the tail pointer after every 4 pages of output, so that it
shouldn't get more than a few pages behind.
Also, when sending signals to other backends after adding notify
message(s) to the queue, recognize that only backends in our own
database are going to care about those messages, so only such
backends really need to be awakened promptly. Backends in other
databases should get kicked if they're well behind on reading the
queue, else they'll hold back the global tail pointer; but wakening
them for every single message is pointless. This change can
substantially reduce signal traffic if listeners are spread among
many databases. It won't help for the common case of only a single
active database, but the extra check costs very little.
Martijn van Oosterhout, with some adjustments by me
Discussion: https://postgr.es/m/CADWG95vtRBFDdrx1JdT1_9nhOFw48KaeTev6F_LtDQAFVpSPhA@mail.gmail.com
Discussion: https://postgr.es/m/CADWG95uFj8rLM52Er80JnhRsTbb_AqPP1ANHS8XQRGbqLrU+jA@mail.gmail.com
| |
Since we introduced the idea of leakproof functions, texteq and textne
were marked leakproof but their sibling text comparison functions were
not. This inconsistency seemed justified because texteq/textne just
relied on memcmp() and so could easily be seen to be leakproof, while
the other comparison functions are far more complex and indeed can
throw input-dependent errors.
However, that argument crashed and burned with the addition of
nondeterministic collations, because now texteq/textne may invoke
the exact same varstr_cmp() infrastructure as the rest. It makes no
sense whatever to give them different leakproofness markings.
After a certain amount of angst we've concluded that it's all right
to consider varstr_cmp() to be leakproof, mostly because the other
choice would be disastrous for performance of many queries where
leakproofness matters. The input-dependent errors should only be
reachable for corrupt input data, or so we hope anyway; certainly,
if they are reachable in practice, we've got problems with requirements
as basic as maintaining a btree index on a text column.
Hence, run around to all the SQL functions that derive from varstr_cmp()
and mark them leakproof. This should result in a useful gain in
flexibility/performance for queries in which non-leakproofness degrades
the efficiency of the query plan.
Back-patch to v12 where nondeterministic collations were added.
While this isn't an essential bug fix given the determination
that varstr_cmp() is leakproof, we might as well apply it now that
we've been forced into a post-beta4 catversion bump.
Discussion: https://postgr.es/m/31481.1568303470@sss.pgh.pa.us
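One quick way to inspect the updated markings, if desired:
    SELECT oid::regprocedure, proleakproof
    FROM pg_proc
    WHERE proname IN ('texteq', 'textne', 'text_lt', 'bttextcmp');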
| |
text_pattern_ops and its siblings can't be used with nondeterministic
collations, because they use the text_eq operator which will not behave
as bitwise equality if applied with a nondeterministic collation. The
initial implementation of that restriction was to insert a run-time test
in the related comparison functions, but that is inefficient, may throw
misleading errors, and will throw errors in some cases that would work.
It seems sufficient to just prevent the combination during CREATE INDEX,
so do that instead.
Lacking any better way to identify the opclasses involved, we need to
hard-wire tests for them, which requires hand-assigned values for their
OIDs, which forces a catversion bump because they previously had OIDs
that would be assigned automatically. That's slightly annoying in the
v12 branch, but fortunately we're not at rc1 yet, so just do it.
Back-patch to v12 where nondeterministic collations were added.
In passing, run make reformat-dat-files, which found some unrelated
whitespace issues (slightly different ones in HEAD and v12).
Peter Eisentraut, with small corrections by me
Discussion: https://postgr.es/m/22566.1568675619@sss.pgh.pa.us
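A minimal sketch of the now-rejected combination, assuming an ICU-enabled
build for the nondeterministic collation:
    CREATE COLLATION case_insensitive (
        provider = icu, locale = 'und-u-ks-level2', deterministic = false);
    CREATE TABLE t (s text COLLATE case_insensitive);
    CREATE INDEX ON t (s text_pattern_ops);  -- now fails at CREATE INDEX time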
| |
The new function stashes its output value in a JsonbValue that can be
passed in by the caller, which enables some callers to pass
stack-allocated structs -- saving palloc cycles. It also allows callers
that know they are handling a jsonb object to use this new jsonb
object-specific API, instead of going through the generic
findJsonbValueFromContainer.
Author: Nikita Glukhov
Discussion: https://postgr.es/m/7c417f90-f95f-247e-ba63-d95e39c0ad14@postgrespro.ru
| |
Instead of creating an iterator object at each step down the JSONB
object/array, we can just examine its object/array flags, which is
faster. Also, use the recently introduced JsonbValueAsText instead of
open-coding the same thing, for code simplicity.
Author: Nikita Glukhov
Discussion: https://postgr.es/m/7c417f90-f95f-247e-ba63-d95e39c0ad14@postgrespro.ru
| |
jsonb_object_field_text and jsonb_array_element_text both contained
identical copies of this code, so extract it into a new routine,
JsonbValueAsText. This can also be used in other places, to measurable
performance benefit: the jsonb_each() and jsonb_array_elements()
functions can use it for outputting text forms instead of their less
efficient current implementation (because we no longer need to build an
intermediate jsonb representation of each value).
Author: Nikita Glukhov
Discussion: https://postgr.es/m/7c417f90-f95f-247e-ba63-d95e39c0ad14@postgrespro.ru
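The SQL-visible call sites that stand to benefit are the text-returning
accessors, e.g.:
    SELECT '{"a": 1}'::jsonb ->> 'a';    -- jsonb_object_field_text
    SELECT '[10, 20, 30]'::jsonb ->> 1;  -- jsonb_array_element_text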
| |
Although the SQL/JSON tech report makes reference to ECMAScript which
allows both single- and double-quoted strings, all the rest of the
report speaks only of double-quoted string literals in jsonpaths.
That's more compatible with JSON itself; moreover single-quoted strings
are hard to use inside a jsonpath that is itself a single-quoted SQL
literal. So guess that the intent is to allow only double-quoted
literals, and remove lexer support for single-quoted literals.
It'll be less painful to add this again later if we're wrong, than to
remove a shipped feature.
Also, adjust the lexer so that unrecognized backslash sequences are
treated as just meaning the escaped character, not as errors. This
change has much better support in the standards, as JSON, JavaScript
and ECMAScript all make it plain that that's what's supposed to
happen.
Back-patch to v12.
Discussion: https://postgr.es/m/CAPpHfdvDci4iqNF9fhRkTqhe-5_8HmzeLt56drH%2B_Rv2rNRqfg@mail.gmail.com
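A small illustration of the accepted syntax after this change:
    SELECT jsonb_path_query('{"a": "x"}', '$ ? (@.a == "x")');  -- double-quoted: OK
    -- a single-quoted literal such as $ ? (@.a == 'x') is no longer accepted,
    -- and an unknown sequence like \q now just means the escaped character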
| |
I (Álvaro) broke that in commit 6212276e4343 -- forgot to set the
necessary flag. Repair.
Author: Amit Langote
Discussion: https://postgr.es/m/CA+HiwqEaM2tV5awKhP1vSbgjQe_uXVU15Oi4sTgwgempwMiT8g@mail.gmail.com
| |
Commit 6cae9d2c10 introduced an error in freeing old values in the
index_store_float8_orderby_distances() function: it looks for the old
value in scan->xs_orderbynulls[i] after setting the new value there.
This commit fixes that. It also removes the short-circuit handling of
the distances == NULL situation. Now distances == NULL is treated the
same way as an array of all-null distances; that is, previous values
are freed, if any.
Reported-by: Tom Lane, Nikita Glukhov
Discussion: https://postgr.es/m/CAPpHfdu2wcoAVAm3Ek66rP%3Duo_C-D84%2B%2Buf1VEcbyi_caBXWCA%40mail.gmail.com
Discussion: https://postgr.es/m/426580d3-a668-b9d1-7b8e-f74d1a6524e0%40postgrespro.ru
Backpatch-through: 12
| |
This commit improves this area in two ways:
* It removes the ugliness of 02f90879e7, which stored distance values and
null flags in two separate arrays after the GISTSearchItem struct. Instead
we pack both the distance value and the null flag into the
IndexOrderByDistance struct. Alignment overhead should be negligible,
because we typically deal with at most a few "col op const" expressions
in an ORDER BY clause.
* It fixes handling of "col op NULL" expressions in KNN-SP-GiST. Now, these
expressions are not passed to support functions, which can't deal with them.
Instead, a NULL result is implicitly assumed. In the future we may decide to
teach support functions to deal with NULL arguments, but the current solution
is a bugfix suitable for backpatch.
Reported-by: Nikita Glukhov
Discussion: https://postgr.es/m/826f57ee-afc7-8977-c44c-6111d18b02ec%40postgrespro.ru
Author: Nikita Glukhov
Reviewed-by: Alexander Korotkov
Backpatch-through: 9.4
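A hypothetical shape of the previously-broken KNN-SP-GiST case, with a NULL
constant in the ordering expression:
    CREATE TABLE pts (p point);
    CREATE INDEX ON pts USING spgist (p);
    SELECT * FROM pts ORDER BY p <-> NULL::point LIMIT 1;
The NULL is now treated as a NULL distance instead of being handed to the
opclass support function.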
| |
Make the error messages around GSSAPI encryption a bit clearer. Tweak
some messages to avoid plural problems.
Also make a code change for clarity. Using "conf" for "confidential"
is quite confusing. Using "conf_state" is perhaps not much better but
that's what the GSSAPI documentation uses, so there is at least some
hope of understanding it.
| |
smgrdounlinkfork() became dead code as the result of commit ece01aae47,
but it was left in place just in case we want it someday. However no users
have appeared in 7 years, so it's time to remove this unused function.
Author: Kirk Jamison
Discussion: https://www.postgresql.org/message-id/D09B13F772D2274BB348A310EE3027C64E2067@g01jpexmbkw24
| |
Author: Mark G <markg735@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/CAEeOP_YFVeFjq4zDZLDQbLSRFxBiTpwBQHxCNgGd%2Bp5VztTXyQ%40mail.gmail.com
| |
The SQL spec defers to XQuery to define what the option flags are
for LIKE_REGEX patterns. XQuery says that:
* 's' allows the dot character to match newlines, which by
default it will not;
* 'm' allows ^ and $ to match at newlines, not only at the
start/end of the whole string.
Thus, these are *not* inverses as they are for the similarly-named
POSIX options, and neither one corresponds to the POSIX 'n' option.
Fortunately, Spencer's library does expose these two behaviors as
separately twiddlable flags, so we just have to fix the mapping from
JSP flag bits to REG flag bits. I also chose to rename the symbol
for 's' to DOTALL, to make it clearer that it's not the inverse
of MLINE.
Also, XQuery says that if the 'q' flag "is used together with the m, s,
or x flag, that flag has no effect". I read this as saying that 'q'
overrides the other flags; whoever wrote our code seems to have read
it backwards.
Lastly, while XQuery's 'x' flag is related to what Spencer's code
does for REG_EXPANDED, it's not the same or a subset. It seems best
to treat XQuery's 'x' as unimplemented for now. Maybe later we can
expand our regex code to offer 'x'-style parsing as a separate option.
While at it, refactor the jsonpath code so that (a) there's only
one copy of the flag transformation logic not two, and (b) the
processing of flags is independent of the order in which the flags
are written.
We need some documentation updates to go with this, but I'll
tackle that separately.
Back-patch to v12 where this code originated.
Discussion: https://postgr.es/m/CAPpHfdvDci4iqNF9fhRkTqhe-5_8HmzeLt56drH%2B_Rv2rNRqfg@mail.gmail.com
Reference: https://www.w3.org/TR/2017/REC-xpath-functions-31-20170321/#flags
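An illustration of the corrected flag semantics, using jsonb_path_query:
    SELECT jsonb_path_query('"line1\nline2"',
                            '$ ? (@ like_regex "^line2" flag "m")');  -- ^ matches at a newline
    SELECT jsonb_path_query('"line1\nline2"',
                            '$ ? (@ like_regex "e1.li" flag "s")');   -- . matches the newline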
| |
We believe that the issues that this was working around have been
fixed in MinGW more than 5 years ago, so this isn't necessary anymore.
Discussion: https://www.postgresql.org/message-id/flat/20190719050830.GK1859%40paquier.xyz
| |
SQL Standard 2016 defines the SSSSS format pattern for seconds past
midnight in the jsonpath .datetime() method and the CAST (... FORMAT ...)
SQL clause. Our datetime parsing engine currently supports it under the
name SSSS.
This commit adds SSSSS as an alias for SSSS. The alias is added in
support of the upcoming jsonpath .datetime() method, but it is also
available in to_date()/to_timestamp() as a positive side effect.
Discussion: https://postgr.es/m/CAPpHfdsZgYEra_PeCLGNoXOWYx6iU-S3wF8aX0ObQUcZU%2B4XTw%40mail.gmail.com
Author: Nikita Glukhov, Alexander Korotkov
Reviewed-by: Anastasia Lubennikova, Peter Eisentraut
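A usage sketch (45296 seconds past midnight is 12:34:56):
    SELECT to_timestamp('2019-09-18 45296', 'YYYY-MM-DD SSSSS');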
| |
SQL Standard 2016 defines the FF1-FF9 format patterns for fractions of
seconds in the jsonpath .datetime() method and the CAST (... FORMAT ...)
SQL clause. The parsing engine of the upcoming .datetime() method will be
shared with to_date()/to_timestamp().
This patch implements the FF1-FF6 format patterns for the upcoming jsonpath
.datetime() method. The to_date()/to_timestamp() functions also get support
for these format patterns as a positive side effect. FF7-FF9 are not
supported due to lack of precision in our internal timestamp representation.
Extracted from original patch by Nikita Glukhov, Teodor Sigaev, Oleg Bartunov.
Heavily revised by me.
Discussion: https://postgr.es/m/fcc6fc6a-b497-f39a-923d-aa34d0c588e8%402ndQuadrant.com
Discussion: https://postgr.es/m/CAPpHfdsZgYEra_PeCLGNoXOWYx6iU-S3wF8aX0ObQUcZU%2B4XTw%40mail.gmail.com
Author: Nikita Glukhov, Teodor Sigaev, Oleg Bartunov, Alexander Korotkov
Reviewed-by: Anastasia Lubennikova, Peter Eisentraut
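A usage sketch, assuming the patterns are reachable through to_timestamp()
as described above:
    SELECT to_timestamp('2019-09-18 12:34:56.123456',
                        'YYYY-MM-DD HH24:MI:SS.FF6');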
| |
Noted by Coverity. Typo in 27cc7cd2b, so back-patch to v12
as that was.
| |
Add the comments that I griped were missing. Also re-order tests
so that parallelism-related tests aren't randomly separated from
each other.
Discussion: https://postgr.es/m/CAAaqYe9GD__4Crm=ddz+-XXcNhfY_V5gFYdLdmkFNq=2VHO56Q@mail.gmail.com
| |
Most WAL records are ignored in the early SnapBuild snapshot build phases.
But it's critical to process some of them, so that later messages have
the correct transaction state after the snapshot is completely built; in
particular, XLOG_XACT_ASSIGNMENT messages are needed so that
sub-transactions are correctly assigned to their parent transactions,
or else at least one assertion misbehaves, as reported by Ildar Musin.
Diagnosed-by: Masahiko Sawada
Author: Masahiko Sawada
Discussion: https://postgr.es/m/CAONYFtOv+Er1p3WAuwUsy1zsCFrSYvpHLhapC_fMD-zNaRWxYg@mail.gmail.com
| |
The progress state was being clobbered once the first index completed
being rebuilt, causing the final phases of the operation not to show
anything in the progress view. This was inadvertently broken in
03f9e5cba0ee, which added progress tracking for REINDEX.
(The reason this bugfix is this small is that I had already noticed this
problem when writing monitoring for CREATE INDEX, and had already worked
around it, as can be seen in discussion starting at
https://postgr.es/m/20190329150218.GA25010@alvherre.pgsql Fixing the
problem is just a matter of fixing one place touched by the REINDEX
monitoring.)
Reported by: Álvaro Herrera
Author: Álvaro Herrera
Discussion: https://postgr.es/m/20190801184333.GA21369@alvherre.pgsql
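The view in question, whose final phases previously showed nothing:
    SELECT pid, command, phase, index_relid::regclass
    FROM pg_stat_progress_create_index;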
| |
Include newitemoff in rmgr desc output for nbtree page split records.
In passing, correct an obsolete comment that claimed that newitemoff is
only logged for _L variant nbtree page split WAL records.
Both issues were oversights in commit 2c03216d831, which revamped the
WAL format.
Author: Peter Geoghegan
Backpatch: 9.5-, where the WAL format was revamped.
| |
Authored by Tom Lane, after a gripe from James Coleman.
Discussion: https://postgr.es/m/CAAaqYe9GD__4Crm=ddz+-XXcNhfY_V5gFYdLdmkFNq=2VHO56Q@mail.gmail.com
| |
This is a second try at what commit 57431a911 tried to do, namely,
launch the syslogger before we open postmaster sockets so that our
messages about the sockets end up in the syslogger files. That
commit fell foul of a bunch of subtle issues caused by trying to
launch a postmaster child process before creating shared memory.
Rather than messing with that interaction, let's postpone opening
the sockets till after we launch the syslogger.
This would not have been terribly safe before commit 7de19fbc0,
because we relied on socket opening to detect whether any competing
postmasters were using the same port number. But now that we choose
IPC keys without regard to the port number, there's no interaction
to worry about.
Also delay creation of the external PID file (if requested) till after
the sockets are open, since external code could plausibly be relying
on that ordering of events. And postpone most of the work of
RemovePgTempFiles() so that that potentially-slow processing still
happens after we make the external PID file. We have to be a bit
careful about that last though: as noted in the discussion subsequent to
bug #15804, EXEC_BACKEND builds still have to clear the parameter-file
temp dir before launching the syslogger.
Patch by me; thanks to Michael Paquier for review/testing.
Discussion: https://postgr.es/m/15804-3721117bf40fb654@postgresql.org
| |
When building statistics, we need to decide how many rows to sample and
how accurate the resulting statistics should be. Until now, it was not
possible to explicitly define a statistics target for extended statistics
objects; the value was always computed from the per-attribute targets
with a fallback to the system-wide default statistics target.
That's a bit inconvenient, as it ties together the statistics targets set
for per-column and extended statistics. In some cases it may be useful
to require a larger sample / higher accuracy for extended statistics (or
the other way around), but with this approach that's not possible.
So this commit introduces a new command, allowing the statistics target
to be specified for individual extended statistics objects, overriding
the value derived from per-attribute targets (and the system default).
ALTER STATISTICS stat_name SET STATISTICS target_value;
When determining statistics target for an extended statistics object we
first look at this explicitly set value. When this value is -1, we fall
back to the old formula, looking at the per-attribute targets first and
then the system default. This means the behavior is backwards compatible
with older PostgreSQL releases.
Author: Tomas Vondra
Discussion: https://postgr.es/m/20190618213357.vli3i23vpkset2xd@development
Reviewed-by: Kirk Jamison, Dean Rasheed
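A usage sketch with a hypothetical statistics object:
    CREATE STATISTICS s_ab (dependencies) ON a, b FROM t;
    ALTER STATISTICS s_ab SET STATISTICS 1000;  -- explicit target
    ALTER STATISTICS s_ab SET STATISTICS -1;    -- back to the derived default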
| |
Up to now, async.c scanned its whole array of per-backend state
whenever it needed to find listening backends. That's expensive
if MaxBackends is large, so extend the data structure with list
links that thread the active entries together.
A downside of this change is that asyncQueueUnregister (unregister
a listening backend at backend exit) now requires exclusive not shared
lock, and it can take awhile if there are many other listening
backends. We could improve the latter issue by using a doubly- not
singly-linked list, but it's probably not worth the storage space;
typical usage patterns for LISTEN/NOTIFY have fairly long-lived
listeners.
In return for that, Exec_ListenPreCommit (initially register a
listening backend), SignalBackends, and asyncQueueAdvanceTail
get significantly faster when MaxBackends is much larger than
the number of listening backends. If most of the potential
backend slots are listening, we don't win, but that's a case
where the actual interprocess-signal overhead is going to swamp
these considerations anyway.
Martijn van Oosterhout, hacked a bit more by me
Discussion: https://postgr.es/m/CADWG95vtRBFDdrx1JdT1_9nhOFw48KaeTev6F_LtDQAFVpSPhA@mail.gmail.com
| |
Assert that _bt_binsrch() binary searches with scantid set in insertion
scankey cannot be performed on leaf pages. Leaf-level binary searches
where scantid is set must use _bt_binsrch_insert() instead.
_bt_binsrch_insert() is likely to have additional responsibilities in
the future, such as searching within GIN-style posting lists using
scantid. It seems like a good idea to tighten things up now.
| |
In ad0bda5d24ea I changed the EvalPlanQual machinery to store
substitution tuples in slots, instead of using plain HeapTuples. The
main motivation for that was that using HeapTuples would be inefficient
for future table AMs. But it turns out that that conversion was buggy
for non-locking rowmarks - the wrong tuple descriptor was used to
create the slot.
As a secondary issue 5db6df0c0 changed ExecLockRows() to begin EPQ
earlier, to allow fetching the locked rows directly into the EPQ
slots, instead of having to copy tuples around. Unfortunately, as Tom
complained, that forces some expensive initialization to happen
earlier.
As a third issue, the test coverage for EPQ was clearly insufficient.
Fixing the first issue is unfortunately not trivial: Non-locked row
marks were fetched at the start of EPQ, and we don't have the type
information for the rowmarks available at that point. While we could
change that, it's not easy. It might be worthwhile to change that at
some point, but to fix this bug, it seems better to delay fetching
non-locking rowmarks until they're actually needed, rather than fetching
them eagerly. They're referenced at most once, and in cases where EPQ
fails, might never be referenced. Fetching them when needed also
increases locality a bit.
To be able to fetch rowmarks during execution, rather than
initialization, we need to be able to access the active EPQState, as
that contains necessary data. To do so move EPQ related data from
EState to EPQState, and, only for EStates created as part of EPQ,
reference the associated EPQState from EState.
To fix the second issue, change EPQ initialization to allow use of
EvalPlanQualSlot() to be used before EvalPlanQualBegin() (but
obviously still requiring EvalPlanQualInit() to have been done).
As these changes made struct EState harder to understand, e.g. by
adding multiple EStates, significantly reorder the members, and add a
lot more comments.
Also add a few more EPQ tests, including one that fails for the first
issue above. More is needed.
Reported-By: yi huang
Author: Andres Freund
Reviewed-By: Tom Lane
Discussion:
https://postgr.es/m/CAHU7rYZo_C4ULsAx_LAj8az9zqgrD8WDd4hTegDTMM1LMqrBsg@mail.gmail.com
https://postgr.es/m/24530.1562686693@sss.pgh.pa.us
Backpatch: 12-, where the EPQ changes were introduced
| |
Commit f2e40380 introduced support for non-key attributes in GiST
indexes. If get_index_column_opclass() is asked by gistproperty() for
the opclass of a non-key column, it returns garbage from past the end of
the oidvector value. This commit fixes that by making
get_index_column_opclass() return InvalidOid in this case.
Discussion: https://postgr.es/m/20190902231948.GA5343%40alvherre.pgsql
Author: Nikita Glukhov, Alexander Korotkov
Backpatch-through: 12
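A hypothetical reproducer, asking for a property of the non-key column:
    CREATE TABLE g (p point, x int);
    CREATE INDEX g_gist ON g USING gist (p) INCLUDE (x);
    SELECT pg_index_column_has_property('g_gist'::regclass, 2,
                                        'distance_orderable');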
| |
Some of these are quite old, but that doesn't make them not bugs.
We'd rather report a failure via elog than SIGSEGV.
While at it, uniformly spell the error check as !RelationIsValid(rel)
rather than a bare rel == NULL test. The machine code is the same
but it seems better to be consistent.
Coverity complained about this today, not sure why, because the
mistake is in fact old.
| |
In order to implement NULLS LAST semantics, GiST previously assumed the
distance to a NULL value to be Inf. However, our distance functions can
return Inf and NaN for non-null values. In such cases, the NULLS LAST
semantics appear to be broken. This commit fixes that by introducing a
separate array of null flags for distances.
Backpatch to all supported versions.
Discussion: https://postgr.es/m/CAPpHfdsNvNdA0DBS%2BwMpFrgwT6C3-q50sFVGLSiuWnV3FqOJuQ%40mail.gmail.com
Author: Alexander Korotkov
Backpatch-through: 9.4
| |
Previously, plain float comparison was used in the GiST pairing heap.
Such comparison doesn't provide proper ordering for value sets containing
Inf and NaN values. This commit fixes that by using float8_cmp_internal().
Note that there is a remaining problem with NULL distances, which are
represented as Inf in the pairing heap. It will be fixed in a subsequent
commit.
Backpatch to all supported versions.
Reported-by: Andrey Borodin
Discussion: https://postgr.es/m/CAPpHfdsNvNdA0DBS%2BwMpFrgwT6C3-q50sFVGLSiuWnV3FqOJuQ%40mail.gmail.com
Author: Alexander Korotkov
Reviewed-by: Heikki Linnakangas
Backpatch-through: 9.4
| |
When COMMIT AND CHAIN or ROLLBACK AND CHAIN was used outside an explicit
transaction block, the previous implementation would leave a
transaction block active in the ROLLBACK case but not the COMMIT case.
To fix this for now, error out when these commands are used outside an
explicit transaction block. This restriction could be lifted if a
sensible definition and implementation is found.
Bug: #15977
Author: fn ln <emuser20140816@gmail.com>
Reviewed-by: Fabien COELHO <coelho@cri.ensmp.fr>
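A small illustration of the new behavior:
    COMMIT AND CHAIN;     -- now an error outside a transaction block
    BEGIN;
    SELECT 1;
    COMMIT AND CHAIN;     -- commits and immediately starts a new transaction
    COMMIT;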
| |
Commit 6f6b99d13 stuck an INFO message into the fast path for
checking partition constraints, for no very good reason except
that it made it easy for the regression tests to verify that
that path was taken. Assorted later patches did likewise,
increasing the unsuppressable-chatter level from ALTER TABLE
even more. This isn't good for the user experience, so let's
drop these messages down to DEBUG1 where they belong. So as
not to have a loss of test coverage, create a TAP test that
runs the relevant queries with client_min_messages = DEBUG1
and greps for the expected messages.
This testing method is a bit brute-force --- in particular,
it duplicates the execution of a fair amount of the core
create_table and alter_table tests. We experimented with
other solutions, but running any significant amount of
standard testing with client_min_messages = DEBUG1 seems
to have a lot of output-stability pitfalls, cf commits
bbb96c370 and 5655565c0. Possibly at some point we'll look
into whether we can reduce the amount of test duplication.
Backpatch into v12, because some of these messages are new
in v12 and we don't really want to ship it that way.
Sergei Kornilov
Discussion: https://postgr.es/m/81911511895540@web58j.yandex.ru
Discussion: https://postgr.es/m/4859321552643736@myt5-02b80404fd9e.qloud-c.yandex.net
| |
As a result of some long-ago quick hacks, the SIMILAR TO operator
and the corresponding flavor of substring() interpreted "ESCAPE NULL"
as selecting the default escape character '\'. This is both
surprising and not per spec: the standard is clear that these
functions should return NULL for NULL input.
Additionally, because of inconsistency of the strictness markings
of 3-argument substring() and similar_escape(), the planner could not
inline the SQL definition of substring(), resulting in a substantial
performance penalty compared to the underlying POSIX substring()
function.
The simplest fix for this would be to change the strictness marking
of similar_escape(), but if we do that we risk breaking existing views
that depend on that function. Hence, leave similar_escape() as-is
as a compatibility function, and instead invent a new function
similar_to_escape() that comes in two strict variants.
There are a couple of other behaviors in this area that are also
not per spec, but they are documented and seem generally at least
as sane as the spec's definition, so leave them alone. But improve
the documentation to describe them fully.
Patch by me; thanks to Álvaro Herrera and Andrew Gierth for review
and discussion.
Discussion: https://postgr.es/m/14047.1557708214@sss.pgh.pa.us
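A small illustration of the corrected behavior:
    SELECT 'abc' SIMILAR TO 'a%' ESCAPE NULL;            -- now NULL, not true
    SELECT substring('abcdef' FROM '%#"cd#"%' FOR '#');  -- unchanged: 'cd'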
| |
This moves much of the non-heap-specific logic from toast_delete and
toast_insert_or_update into helper functions accessible via a new
header, toast_helper.h. Using the functions in this module, a table
AM can implement creation and deletion of TOAST table rows with
much less code duplication than was possible heretofore. Some
table AMs won't want to use the TOAST logic at all, but for those
that do this will make that easier.
Patch by me, reviewed and tested by Prabhat Sabu, Thomas Munro,
Andres Freund, and Álvaro Herrera.
Discussion: http://postgr.es/m/CA+TgmoZv-=2iWM4jcw5ZhJeL18HF96+W1yJeYrnGMYdkFFnEpQ@mail.gmail.com
| |
The old code didn't differentiate between a read error and a
concurrent truncation. fread reports both of these by returning 0;
you have to use feof() or ferror() to distinguish between them,
which this code did not do.
It might be a better idea to use read() rather than fread() here,
so that we can display a less-generic error message, but I'm not
sure that would qualify as a back-patchable bug fix, so just do
this much for now.
Jeevan Chalke, reviewed by Jeevan Ladhe and by me.
Discussion: http://postgr.es/m/CA+TgmobG4ywMzL5oQq2a8YKp8x2p3p1LOMMcGqpS7aekT9+ETA@mail.gmail.com
| |
Previously, even if the postmaster died and WaitLatch() woke up with
that event while pg_promote() was waiting for the standby promotion to
finish, pg_promote() did nothing special and kept waiting until the
timeout occurred. This could cause a busy loop.
This patch makes pg_promote() return false immediately when the
postmaster dies, to avoid such a busy loop.
Back-patch to v12 where pg_promote() was added.
Author: Fujii Masao
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/CAHGQGwEs9ROgSp+QF+YdDU+xP8W=CY1k-_Ov-d_Z3JY+to3eXA@mail.gmail.com
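The function in question; it now returns false promptly if the postmaster
dies while it is waiting:
    SELECT pg_promote(wait := true, wait_seconds := 60);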