The DTK_DOW/DTK_ISODOW and DTK_DOY switch cases in timestamp_part() and
timestamptz_part() contained calls of timestamp2tm() that were fully
redundant with the ones done just above the switch. This evidently crept
in during commit 258ee1b63, which relocated that code from another place
where the calls were indeed needed. Just delete the redundant calls.
I (tgl) noted that our test coverage of these functions left quite a
bit to be desired, so extend timestamp.sql and timestamptz.sql to
cover all the branches.
Back-patch to all supported branches, as the previous commit was.
There's no real issue here other than some wasted cycles in some
not-too-heavily-used code paths, but the test coverage seems valuable.
Report and patch by Li Japin; test case adjustments by me.
Discussion: https://postgr.es/m/SG2PR06MB37762CAE45DB0F6CA7001EA9B6550@SG2PR06MB3776.apcprd06.prod.outlook.com
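For illustration, SQL of the kind the extended tests now exercise (a
minimal sketch, not the committed test cases):

    -- these reach the DTK_DOW/DTK_ISODOW/DTK_DOY branches of timestamp_part()
    SELECT extract(dow    FROM timestamp '2020-02-11 13:45:00');
    SELECT extract(isodow FROM timestamp '2020-02-11 13:45:00');
    SELECT extract(doy    FROM timestamp '2020-02-11 13:45:00');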
Commit f13ea95f9e moved the description of the postmaster.pid file
contents from miscadmin.h to pidfile.h, but neglected to update the
comments in miscinit.c.
Author: Hadi Moshayedi
Reviewed-by: Amit Kapila
Backpatch-through: 10
Discussion: https://postgr.es/m/CAK=1=WpYEM9x3LGkaxgXaxeYQjnkdW8XLsxrYRTE2Gq-H83FMw@mail.gmail.com
We implement ON COMMIT DELETE ROWS by truncating tables marked that
way, which requires also truncating/rebuilding their indexes. But
RelationTruncateIndexes asks the relcache for up-to-date copies of any
index expressions, which may cause execution of eval_const_expressions
on them, which can result in actual execution of subexpressions.
This is a bad thing to have happening during ON COMMIT. Manuel Rigger
reported that use of a SQL function resulted in crashes due to
expectations that ActiveSnapshot would be set, which it isn't.
The most obvious fix perhaps would be to push a snapshot during
PreCommit_on_commit_actions, but I think that would just open the door
to more problems: CommitTransaction explicitly expects that no
user-defined code can be running at this point.
Fortunately, since we know that no tuples exist to be indexed, there
seems no need to use the real index expressions or predicates during
RelationTruncateIndexes. We can set up dummy index expressions
instead (we do need something that will expose the right data type,
as there are places that build index tupdescs based on this), and
just ignore predicates and exclusion constraints.
In a green field it'd likely be better to reimplement ON COMMIT DELETE
ROWS using the same "init fork" infrastructure used for unlogged
relations. That seems impractical without catalog changes though,
and even without that it'd be too big a change to back-patch.
So for now do it like this.
Per private report from Manuel Rigger. This has been broken forever,
so back-patch to all supported branches.
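A minimal sketch of the failure scenario (object names are made up; the
original report involved a SQL function used in an index expression):

    CREATE FUNCTION f(x int) RETURNS int
        AS 'SELECT x + 1' LANGUAGE sql IMMUTABLE;
    CREATE TEMP TABLE tt (a int) ON COMMIT DELETE ROWS;
    CREATE INDEX ON tt (f(a));
    BEGIN;
    INSERT INTO tt VALUES (1);
    COMMIT;  -- truncating tt rebuilds its index; evaluating the index
             -- expression here could run f() with no ActiveSnapshot set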
The fix for CVE-2017-7484 disallowed use of pg_statistic data for
planning purposes if the user would not be able to select the associated
column and a non-leakproof function is to be applied to the statistics
values. That turns out to disable use of pg_statistic data in some
common cases involving inheritance/partitioning, where the user does
have permission to select from the parent table that was actually named
in the query, but not from a child table whose stats are needed. Since,
in non-corner cases, the user *can* select the child table's data via
the parent, this restriction is not actually useful from a security
standpoint. Improve the logic so that we also check the permissions of
the originally-named table, and allow access if select permission exists
for that.
When checking access to stats for a simple child column, we can map
the child column number back to the parent, and perform this test
exactly (including not allowing access if the child column isn't
exposed by the parent). For expression indexes, the current logic
just insists on whole-table select access, and this patch allows
access if the user can select the whole parent table. In principle,
if the child table has extra columns, this might allow access to
stats on columns the user can't read. In practice, it's unlikely
that the planner is going to do any stats calculations involving
expressions that are not visible to the query, so we'll ignore that
fine point for now. Perhaps someday we'll improve that logic to
detect exactly which columns are used by an expression index ...
but today is not that day.
Back-patch to v11. The issue was created in 9.2 and up by the
CVE-2017-7484 fix, but this patch depends on the append_rel_array[]
planner data structure which only exists in v11 and up. In
practice the issue is most urgent with partitioned tables, so
fixing v11 and later should satisfy much of the practical need.
Dilip Kumar and Amit Langote, with some kibitzing by me
Discussion: https://postgr.es/m/3876.1531261875@sss.pgh.pa.us
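Schematically, the case this re-enables (role and table names are
hypothetical):

    CREATE TABLE parent (a int) PARTITION BY RANGE (a);
    CREATE TABLE child PARTITION OF parent FOR VALUES FROM (0) TO (100);
    GRANT SELECT ON parent TO alice;  -- no grant on child itself
    -- As alice: since child's rows are readable via the parent anyway,
    -- the planner may again use child's pg_statistic entries here.
    SELECT * FROM parent WHERE a = 42;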
While a self-referential view doesn't actually work, it's possible
to create one, and it turns out that this breaks some of the
information_schema views. Those views call relation_is_updatable(),
which neglected to consider the hazards of being recursive. In
older PG versions you get a "stack depth limit exceeded" error,
but since v10 it'd recurse to the point of stack overrun and crash,
because commit a4c35ea1c took out the expression_returns_set() call
that was incidentally checking the stack depth.
Since this function is only used by information_schema views, it
seems like it'd be better to return "not updatable" than suffer
an error. Hence, add tracking of what views we're examining,
in just the same way that the nearby fireRIRrules() code detects
self-referential views. I added a check_stack_depth() call too,
just to be defensive.
Per private report from Manuel Rigger. Back-patch to all
supported versions.
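One way to construct such a view (an illustrative sketch):

    CREATE TABLE t (a int);
    CREATE VIEW v AS SELECT a FROM t;
    CREATE OR REPLACE VIEW v AS SELECT a FROM v;  -- now self-referential
    -- formerly recursed without bound; now just reports "not updatable"
    SELECT is_updatable FROM information_schema.views
    WHERE table_name = 'v';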
Trying to use hypothetical indexes with BRIN currently fails, because
the code looking up the statistics tries to access a relation that does
not exist. With the current API, it is not possible to easily pass
a value for pages_per_range down to the hypothetical index, so this
makes use of the default value of BRIN_DEFAULT_PAGES_PER_RANGE, which
should be good enough in most cases.
Being able to refine or enforce the hypothetical costs in more
optimistic ways would require more refactoring by filling in the
statistics when building IndexOptInfo in plancat.c. This would involve
ABI breakages around the costing routines, something not fit for stable
branches.
This has been broken since 7e534ad, so backpatch down to v10.
Author: Julien Rouhaud, Heikki Linnakangas
Reviewed-by: Álvaro Herrera, Tom Lane, Michael Paquier
Discussion: https://postgr.es/m/CAOBaU_ZH0LKEA8VFCocr6Lpte1ab0b6FpvgS0y4way+RPSXfYg@mail.gmail.com
Backpatch-through: 10
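Hypothetical indexes come from out-of-core tooling; for instance, with
the HypoPG extension (not part of core, shown only as an illustration):

    CREATE EXTENSION hypopg;
    CREATE TABLE t (a int);
    -- previously, costing this hypothetical BRIN index failed while
    -- looking up statistics for a relation that does not exist:
    SELECT * FROM hypopg_create_index('CREATE INDEX ON t USING brin (a)');
    EXPLAIN SELECT * FROM t WHERE a = 1;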
The planner's optimization code for LIKE and regex operators could
error out with a complaint like "no = operator for opfamily NNN"
if someone created a binary-compatible index (for example, a
bpchar_ops index on a text column) on the LIKE's left argument.
This is a consequence of careless refactoring in commit 74dfe58a5.
The old code in match_special_index_operator only accepted specific
combinations of the pattern operator and the index opclass, thereby
indirectly guaranteeing that the opclass would have a comparison
operator with the same LHS input type as the pattern operator.
While moving the logic out to a planner support function, I simplified
that test in a way that no longer guarantees that. Really though we'd
like an altogether weaker dependency on the opclass, so rather than
put back exactly the old code, just allow lookup failure. I have in
mind now to rewrite this logic completely, but this is the minimum
change needed to fix the bug in v12.
Per report from Manuel Rigger. Back-patch to v12 where the mistake
came in.
Discussion: https://postgr.es/m/CA+u7OA7nnGYy8rY0vdTe811NuA+Frr9nbcBO9u2Z+JxqNaud+g@mail.gmail.com
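A sketch of the trigger case, using the binary-compatible opclass named
in the message:

    CREATE TABLE t (a text);
    CREATE INDEX ON t (a bpchar_ops);  -- binary-compatible index
    -- formerly: ERROR: no = operator for opfamily ...
    SELECT * FROM t WHERE a LIKE 'abc%';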
It turns out that commit e9f1c01b7 missed a case: we must print a
VALUES clause in long format if get_query_def is given a resultDesc
that would require the query's output column name(s) to be different
from what the bare VALUES clause would produce.
This applies in case an ALTER ... RENAME COLUMN has been done to
a view that formerly could be printed in simple format, as shown
in the added regression test case. It also explains bug #16119
from Dmitry Telpt, because it turns out that (unlike CREATE VIEW)
CREATE MATERIALIZED VIEW fails to apply any column aliases it's
given to the stored ON SELECT rule. So to get them to be printed,
we have to account for the resultDesc renaming. It might be worth
changing the matview code so that it creates the ON SELECT rule
with the correct aliases; but we'd still need these messy checks in
get_simple_values_rte to handle the case of a subsequent column
rename, so any such change would be just neatnik-ism not a bug fix.
Like the previous patch, back-patch to all supported branches.
Discussion: https://postgr.es/m/16119-e64823f30a45a754@postgresql.org
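A compact illustration of the renamed-column case:

    CREATE VIEW v AS VALUES (1, 'one');
    ALTER TABLE v RENAME COLUMN column1 TO id;  -- works on views too
    -- the rule must now print in long format, with an alias list that
    -- supplies "id", rather than as a bare VALUES clause:
    SELECT pg_get_viewdef('v'::regclass);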
When estimating the number of distinct groups, we failed to ignore system
attributes when matching the group expressions to mvdistinct stats,
causing failures like
ERROR: negative bitmapset member not allowed
Fix that by simply skipping anything that is not a regular attribute.
Backpatch to PostgreSQL 10, where the extended stats were introduced.
Bug: #16111
Reported-by: Tuomas Leikola
Author: Tomas Vondra
Backpatch-through: 10
Discussion: https://postgr.es/m/16111-687799584c3a7e73@postgresql.org
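A sketch of the sort of query involved (a guess at a reproducer, not
the reporter's exact case):

    CREATE TABLE t (a int, b int);
    CREATE STATISTICS s (ndistinct) ON a, b FROM t;
    ANALYZE t;
    -- grouping by a system column alongside the statistics columns
    -- previously failed with the bitmapset error:
    SELECT count(*) FROM t GROUP BY tableoid, a, b;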
We should throw an error for indeterminate collation, but bpcharne()
was missing that logic, resulting in a much less user-friendly error
(either an assertion failure or "cache lookup failed for collation 0").
Per report from Manuel Rigger. Back-patch to v12 where the mistake
came in, evidently in commit 5e1963fb7. (Before non-deterministic
collations, this function wasn't collation sensitive.)
Discussion: https://postgr.es/m/CA+u7OA4HOjtymxAbuGNh4-X_2R0Lw5n01tzvP8E5-i-2gQXYWA@mail.gmail.com
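A minimal sketch of an indeterminate-collation comparison reaching
bpcharne():

    CREATE TABLE t (a char(3) COLLATE "C", b char(3) COLLATE "POSIX");
    -- conflicting column collations leave the comparison with no
    -- determinable collation; this now fails with a clear error:
    SELECT a <> b FROM t;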
This patch adopts the overflow check logic introduced by commit cbdb8b4c0
into two more places. interval_mul() failed to notice if it computed a
new microseconds value that was one more than INT64_MAX, and pgbench's
double-to-int64 logic had the same sorts of edge-case problems that
cbdb8b4c0 fixed in the core code.
To make this easier to get right in future, put the guts of the checks
into new macros in c.h, and add commentary about how to use the macros
correctly.
Back-patch to all supported branches, as we did with the previous fix.
Yuya Watari
Discussion: https://postgr.es/m/CAJ2pMkbkkFw2hb9Qb1Zj8d06EhWAQXFLy73St4qWv6aX=vqnjw@mail.gmail.com
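The user-visible symptom for interval_mul(), sketched with arbitrary
large inputs (exact boundary values elided):

    -- microsecond overflow must now be detected rather than wrapping:
    SELECT interval '1000000 hours' * 10000;
    -- ERROR: interval out of range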
Since commit d26a810eb, we've defined bool as being either _Bool from
<stdbool.h>, or "unsigned char"; but that commit overlooked the fact
that probes.d has "#define bool char". For consistency, make it say
"unsigned char" instead. This should be strictly a cosmetic change,
but it seems best to be in sync.
Formally, in the now-normal case where we're using <stdbool.h>, it'd
be better to write "#define bool _Bool". However, then we'd need
some build infrastructure to inject that configuration choice into
probes.d, and it doesn't seem worth the trouble. We only use
<stdbool.h> if sizeof(_Bool) is 1, so having DTrace think that
bool parameters are "unsigned char" should be close enough.
Back-patch to v12 where d26a810eb came in.
Discussion: https://postgr.es/m/CAA4eK1LmaKO7Du9M9Lo=kxGU8sB6aL8fa3sF6z6d5yYYVe3BuQ@mail.gmail.com
Rearrange the logic in record_image_cmp() and datum_image_eq() to
error out on unexpected typlens (either not supported there or
completely invalid due to corruption). Barring corruption, such cases
are not reachable today, but erroring out explicitly seems more
future-proof and robust.
Reported-by: Peter Geoghegan <pg@bowt.ie>
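These functions back the binary-image record comparison operators, so
they are reachable from SQL; for example:

    SELECT ROW(1, 'a'::text) *= ROW(1, 'a'::text);  -- image equality
    SELECT ROW(1, 'a'::text) *< ROW(1, 'b'::text);  -- image ordering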
Commit 9556aa01c rearranged the innards of text_position() in a way
that would make it not work for empty search strings. Which is fine,
because all callers of that code special-case an empty pattern in
some way. However, the primary use-case (text_position itself) got
special-cased incorrectly: historically it's returned 1 not 0 for
an empty search string. Restore the historical behavior.
Per complaint from Austin Drenski (via Shay Rojansky).
Back-patch to v12 where it got broken.
Discussion: https://postgr.es/m/CADT4RqAz7oN4vkPir86Kg1_mQBmBxCp-L_=9vRpgSNPJf0KRkw@mail.gmail.com
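The restored historical behavior, for reference:

    SELECT position('' in 'abc');   -- returns 1, not 0
    SELECT position('b' in 'abc');  -- returns 2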
Otherwise it can be hard to see where an error is coming from, when
the parallel worker sets all the GUCs that it received from the
leader. Bug #15726. Back-patch to 9.5, where RestoreGUCState()
appeared.
Reported-by: Tiago Anastacio
Reviewed-by: Daniel Gustafsson, Tom Lane
Discussion: https://postgr.es/m/15726-6d67e4fa14f027b3%40postgresql.org
Since the introduction of different slot types, in 1a0586de3657, we
create a virtual slot in tuplesort_begin_cluster(). While that looks
right, it unfortunately doesn't actually work, as ExecStoreHeapTuple()
is used to store tuples in the slot. No regression tests for CLUSTER
on expression indexes existed so far.
Fix the slot type, and add bare bones tests for CLUSTER on expression
indexes.
Reported-By: Justin Pryzby
Author: Andres Freund
Discussion: https://postgr.es/m/20191011210320.GS10470@telsasoft.com
Backpatch: 12, like 1a0586de3657
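Roughly the shape of the new coverage (a bare-bones illustration, not
the committed test):

    CREATE TABLE clstr (a int);
    INSERT INTO clstr SELECT generate_series(1, 100);
    CREATE INDEX clstr_expr_idx ON clstr ((a % 10));
    CLUSTER clstr USING clstr_expr_idx;  -- formerly failed on the
                                         -- wrongly-typed virtual slot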
Commit 5ac0d9360 failed to entirely fix bitshiftright's habit of
leaving one-bits in the pad space that should be all zeroes,
because in a moment of sheer brain fade I'd concluded that only
the code path used for not-a-multiple-of-8 shift distances needed
to be fixed. Of course, a multiple-of-8 shift distance can also
cause the problem, so we need to forcibly zero the extra bits
in both cases.
Per bug #16037 from Alexander Lakhin. As before, back-patch to all
supported branches.
Discussion: https://postgr.es/m/16037-1d1ebca564db54f4@postgresql.org
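A quick way to see the requirement (a hypothetical check, not the
reporter's exact case):

    -- a 7-bit value shifted by a multiple of 8 must come out all zeroes,
    -- pad bits included, or comparisons like this one misbehave:
    SELECT (B'1000001' >> 8) = B'0000000';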
Encoding conversion uses the very simplistic rule that the output
can't be more than 4X longer than the input, and pallocs a buffer
of that size. This results in failure to convert any string longer
than 1/4 GB, which is becoming an annoying limitation.
As a band-aid to improve matters, allow the allocated output buffer
size to exceed 1GB. We still insist that the final result fit into
MaxAllocSize (1GB), though. Perhaps it'd be safe to relax that
restriction, but it'd require close analysis of all callers, which
is daunting (not least because external modules might call these
functions). For the moment, this should allow a 2X to 4X improvement
in the longest string we can convert, which is a useful gain in
return for quite a simple patch.
Also, once we have successfully converted a long string, repalloc
the output down to the actual string length, returning the excess
to the malloc pool. This seems worth doing since we can usually
expect to give back several MB if we take this path at all.
This still leaves much to be desired, most notably that the assumption
that MAX_CONVERSION_GROWTH == 4 is very fragile, and yet we have no
guard code verifying that the output buffer isn't overrun. Fixing
that would require significant changes in the encoding conversion
APIs, so it'll have to wait for some other day.
The present patch seems safely back-patchable, so patch all supported
branches.
Alvaro Herrera and Tom Lane
Discussion: https://postgr.es/m/20190816181418.GA898@alvherre.pgsql
Discussion: https://postgr.es/m/3614.1569359690@sss.pgh.pa.us
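Illustrative only, assuming a UTF8 database: the inner repeat() builds
a ~300MB string, which previously failed outright because the 4X-sized
palloc exceeded MaxAllocSize:

    SELECT length(convert(convert_to(repeat('a', 300 * 1024 * 1024),
                                     'LATIN1'),
                          'LATIN1', 'UTF8'));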
Up to now, if you resized a large (>8K) palloc chunk down to a smaller
size, aset.c made no attempt to return any space to the malloc pool.
That's unpleasant if a really large allocation is resized to a
significantly smaller size. I think no such cases existed when this
code was designed, and I'm not sure whether they're common even yet,
but an upcoming fix to encoding conversion will certainly create such
cases. Therefore, fix AllocSetRealloc so that it gives realloc()
a chance to do something with the block. This doesn't noticeably
increase complexity, we mostly just have to change the order in which
the cases are considered.
Back-patch to all supported branches.
Discussion: https://postgr.es/m/20190816181418.GA898@alvherre.pgsql
Discussion: https://postgr.es/m/3614.1569359690@sss.pgh.pa.us
If the bitstring length is not a multiple of 8, we'd shift the
rightmost bits into the pad space, which must be zeroes --- bit_cmp,
for one, depends on that. This'd lead to the result failing to
compare equal to what it should compare equal to, as reported in
bug #16013 from Daryl Waycott.
This is, if memory serves, not the first such bug in the bitstring
functions. In hopes of making it the last one, do a bit more work
than minimally necessary to fix the bug:
* Add assertion checks to bit_out() and varbit_out() to complain if
they are given incorrectly-padded input. This will improve the
odds that manual testing of any new patch finds problems.
* Encapsulate the padding-related logic in macros to make it
easier to use.
Also, remove unnecessary padding logic from bit_or() and bitxor().
Somebody had already noted that we need not re-pad the result of
bit_and() since the inputs are required to be the same length,
but failed to extrapolate that to the other two.
Also, move a comment block that once was near the head of varbit.c
(but people kept putting other stuff in front of it), to put it in
the header block.
Note for the release notes: if anyone has inconsistent data as a
result of saving the output of bitshiftright() in a table, it's
possible to fix it with something like
UPDATE mytab SET bitcol = ~(~bitcol) WHERE bitcol != ~(~bitcol);
This has been broken since day one, so back-patch to all supported
branches.
Discussion: https://postgr.es/m/16013-c2765b6996aacae9@postgresql.org
Since we introduced the idea of leakproof functions, texteq and textne
were marked leakproof but their sibling text comparison functions were
not. This inconsistency seemed justified because texteq/textne just
relied on memcmp() and so could easily be seen to be leakproof, while
the other comparison functions are far more complex and indeed can
throw input-dependent errors.
However, that argument crashed and burned with the addition of
nondeterministic collations, because now texteq/textne may invoke
the exact same varstr_cmp() infrastructure as the rest. It makes no
sense whatever to give them different leakproofness markings.
After a certain amount of angst we've concluded that it's all right
to consider varstr_cmp() to be leakproof, mostly because the other
choice would be disastrous for performance of many queries where
leakproofness matters. The input-dependent errors should only be
reachable for corrupt input data, or so we hope anyway; certainly,
if they are reachable in practice, we've got problems with requirements
as basic as maintaining a btree index on a text column.
Hence, run around to all the SQL functions that derive from varstr_cmp()
and mark them leakproof. This should result in a useful gain in
flexibility/performance for queries in which non-leakproofness degrades
the efficiency of the query plan.
Back-patch to v12 where nondeterministic collations were added.
While this isn't an essential bug fix given the determination
that varstr_cmp() is leakproof, we might as well apply it now that
we've been forced into a post-beta4 catversion bump.
Discussion: https://postgr.es/m/31481.1568303470@sss.pgh.pa.us
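The new markings are visible in the catalogs; for example:

    SELECT proname, proleakproof
    FROM pg_proc
    WHERE proname IN ('texteq', 'textne', 'text_lt', 'bttextcmp');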
text_pattern_ops and its siblings can't be used with nondeterministic
collations, because they use the text_eq operator which will not behave
as bitwise equality if applied with a nondeterministic collation. The
initial implementation of that restriction was to insert a run-time test
in the related comparison functions, but that is inefficient, may throw
misleading errors, and will throw errors in some cases that would work.
It seems sufficient to just prevent the combination during CREATE INDEX,
so do that instead.
Lacking any better way to identify the opclasses involved, we need to
hard-wire tests for them, which requires hand-assigned values for their
OIDs, which forces a catversion bump because they previously had OIDs
that would be assigned automatically. That's slightly annoying in the
v12 branch, but fortunately we're not at rc1 yet, so just do it.
Back-patch to v12 where nondeterministic collations were added.
In passing, run make reformat-dat-files, which found some unrelated
whitespace issues (slightly different ones in HEAD and v12).
Peter Eisentraut, with small corrections by me
Discussion: https://postgr.es/m/22566.1568675619@sss.pgh.pa.us
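A sketch of the now-rejected combination (the ICU locale string is just
one example of a case-insensitive collation):

    CREATE COLLATION ci (provider = icu, locale = 'und-u-ks-level2',
                         deterministic = false);
    CREATE TABLE t (a text COLLATE ci);
    -- rejected up front instead of misbehaving at run time:
    CREATE INDEX ON t (a text_pattern_ops);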
Although the SQL/JSON tech report makes reference to ECMAScript which
allows both single- and double-quoted strings, all the rest of the
report speaks only of double-quoted string literals in jsonpaths.
That's more compatible with JSON itself; moreover single-quoted strings
are hard to use inside a jsonpath that is itself a single-quoted SQL
literal. So guess that the intent is to allow only double-quoted
literals, and remove lexer support for single-quoted literals.
It'll be less painful to add this again later if we're wrong, than to
remove a shipped feature.
Also, adjust the lexer so that unrecognized backslash sequences are
treated as just meaning the escaped character, not as errors. This
change has much better support in the standards, as JSON, JavaScript
and ECMAScript all make it plain that that's what's supposed to
happen.
Back-patch to v12.
Discussion: https://postgr.es/m/CAPpHfdvDci4iqNF9fhRkTqhe-5_8HmzeLt56drH%2B_Rv2rNRqfg@mail.gmail.com
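Two small examples of the resulting lexer behavior (illustrative):

    -- double-quoted string literals are the supported form:
    SELECT jsonb_path_query('["foo", "bar"]', '$[*] ? (@ == "foo")');
    -- an unrecognized backslash sequence now means the escaped character
    -- itself, so "fo\o" matches "foo":
    SELECT jsonb_path_query('["foo"]', '$[*] ? (@ == "fo\o")');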
The SQL spec defers to XQuery to define what the option flags are
for LIKE_REGEX patterns. XQuery says that:
* 's' allows the dot character to match newlines, which by
default it will not;
* 'm' allows ^ and $ to match at newlines, not only at the
start/end of the whole string.
Thus, these are *not* inverses as they are for the similarly-named
POSIX options, and neither one corresponds to the POSIX 'n' option.
Fortunately, Spencer's library does expose these two behaviors as
separately twiddlable flags, so we just have to fix the mapping from
JSP flag bits to REG flag bits. I also chose to rename the symbol
for 's' to DOTALL, to make it clearer that it's not the inverse
of MLINE.
Also, XQuery says that if the 'q' flag "is used together with the m, s,
or x flag, that flag has no effect". I read this as saying that 'q'
overrides the other flags; whoever wrote our code seems to have read
it backwards.
Lastly, while XQuery's 'x' flag is related to what Spencer's code
does for REG_EXPANDED, it's not the same or a subset. It seems best
to treat XQuery's 'x' as unimplemented for now. Maybe later we can
expand our regex code to offer 'x'-style parsing as a separate option.
While at it, refactor the jsonpath code so that (a) there's only
one copy of the flag transformation logic not two, and (b) the
processing of flags is independent of the order in which the flags
are written.
We need some documentation updates to go with this, but I'll
tackle that separately.
Back-patch to v12 where this code originated.
Discussion: https://postgr.es/m/CAPpHfdvDci4iqNF9fhRkTqhe-5_8HmzeLt56drH%2B_Rv2rNRqfg@mail.gmail.com
Reference: https://www.w3.org/TR/2017/REC-xpath-functions-31-20170321/#flags
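For illustration, the two flags with the corrected XQuery semantics:

    -- 's' lets "." match a newline; 'm' makes ^ and $ match at newlines:
    SELECT jsonb_path_query('"a\nb"', '$ ? (@ like_regex "a.b" flag "s")');
    SELECT jsonb_path_query('"a\nb"', '$ ? (@ like_regex "^b$" flag "m")');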
Commit f2e40380 introduced support for non-key attributes in GiST
indexes. If get_index_column_opclass() is then asked by gistproperty()
for the opclass of a non-key column, it returns garbage read past the
end of the oidvector value. This commit fixes that by making
get_index_column_opclass() return InvalidOid in this case.
Discussion: https://postgr.es/m/20190902231948.GA5343%40alvherre.pgsql
Author: Nikita Glukhov, Alexander Korotkov
Backpatch-through: 12
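A sketch of the affected case (INCLUDE columns on GiST are new in v12):

    CREATE TABLE t (p point, b int);
    CREATE INDEX t_gist ON t USING gist (p) INCLUDE (b);
    -- asking about the included, non-key column previously read past the
    -- end of the oidvector:
    SELECT pg_index_column_has_property('t_gist'::regclass, 2,
                                        'distance_orderable');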
The leak happens in str_tolower, str_toupper and str_initcap, which are
used in several places including their equivalent SQL-level functions,
and can only be triggered when using an ICU-provided collation to
convert the input string.
b615920 fixed a similar leak. Backpatch down to 10, where ICU
collations were introduced.
Author: Konstantin Knizhnik
Discussion: https://postgr.es/m/94c0ad0a-cbc2-e4a3-7829-2bdeaf9146db@postgrespro.ru
Backpatch-through: 10
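Each of these leaked memory per call under an ICU collation (assumes a
build with ICU support):

    SELECT lower('ÄBC' COLLATE "und-x-icu");
    SELECT upper('äbc' COLLATE "und-x-icu");
    SELECT initcap('äbc' COLLATE "und-x-icu");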
RelationAllowsEarlyPruning() performed a catalog scan, but is used
in two contexts where that was a bad idea:
1. In heap_page_prune_opt(), which runs very frequently in some large
scans. This caused major performance problems in a field report
that was easy to reproduce.
2. In TestForOldSnapshot(), which runs while we hold a buffer content
lock. It's not clear if this was guaranteed to be free of buffer
deadlock risk.
The check was introduced in commit 2cc41acd8 and defended against a
real problem: 9.6's hash indexes have no page LSN and so we can't
allow early pruning (ie the snapshot-too-old feature). We can remove
the check from all later releases though: hash indexes are now logged,
and there is no way to create UNLOGGED indexes on regular logged
tables.
If a future release allows such a combination, it might need to put
a similar check in place, but it'll need some more thought.
Back-patch to 10.
Author: Thomas Munro
Reviewed-by: Tom Lane, who spotted the second problem
Discussion: https://postgr.es/m/CA%2BhUKGKT8oTkp5jw_U4p0S-7UG9zsvtw_M47Y285bER6a2gD%2Bg%40mail.gmail.com
Discussion: https://postgr.es/m/CAA4eK1%2BWy%2BN4eE5zPm765h68LrkWc3Biu_8rzzi%2BOYX4j%2BiHRw%40mail.gmail.com
An empty file name or subdirectory name leads join_path_components() to
just produce the parent directory name, which leads to weird failures or
recursive inclusions. Let's throw a specific error for that. It takes
only slightly more code to detect all-blank names, so do so.
Also, detect direct recursion, ie a file calling itself. As coded
this will also detect recursion via "include_dir '.'", which is
perhaps more likely than explicitly including the file itself.
Detecting indirect recursion would require API changes for guc-file.l
functions, which seems not worth it since extensions might call them.
The nesting depth limit will catch such cases eventually, just not
with such an on-point error message.
In passing, adjust the example usages in postgresql.conf.sample
to perhaps eliminate the problem at the source: there's no reason
for the examples to suggest that an empty value is valid.
Per a trouble report from Brent Bates. Back-patch to 9.5; the
issue is old, but the code in 9.4 is enough different that the
patch doesn't apply easily, and it doesn't seem worth the trouble
to fix there.
Ian Barwick and Tom Lane
Discussion: https://postgr.es/m/8c8bcbca-3bd9-dc6e-8986-04a5abdef142@2ndquadrant.com
There were some minor differences that didn't seem necessary.
Discussion: https://www.postgresql.org/message-id/flat/86b67eef-bb26-c97d-3e35-64f1fbd4f9fe%402ndquadrant.com
If the record argument is NULL and has no declared type more concrete
than RECORD, we can't extract useful information about the desired
rowtype from it. In this case, see if we're in FROM with an AS clause,
and if so extract the needed rowtype info from AS.
It worked like this before v11, but commit 37a795a60 removed the
behavior, reasoning that it was undocumented, inefficient, and utterly
not self-consistent. If you want to take type info from an AS clause,
you should be using the json_to_record() family of functions not the
json_populate_record() family. Also, it was already the case that
the "populate" functions would fail for a null-valued RECORD input
(with an unfriendly "record type has not been registered" error)
when there wasn't an AS clause at hand, and it wasn't obvious that
that behavior wasn't OK when there was one. However, it emerges
that some people were depending on this to work, and indeed the
rather off-point error message you got if you left off AS encouraged
slapping on AS without switching to the json_to_record() family.
Hence, put back the fallback behavior of looking for AS. While at it,
improve the run-time error you get when there's no place to obtain type
info; we can do a lot better than "record type has not been registered".
(We can't, unfortunately, easily improve the parse-time error message
that leads people down this path in the first place.)
While at it, I refactored the code a bit to avoid duplicating the
same logic in several different places.
Per bug #15940 from Jaroslav Sivy. Back-patch to v11 where the
current coding came in. (The pre-v11 deficiencies in this area
aren't regressions, so we'll leave those branches alone.)
Patch by me, based on preliminary analysis by Dmitry Dolgov.
Discussion: https://postgr.es/m/15940-2ab76dc58ffb85b6@postgresql.org
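The restored behavior, sketched:

    -- with a NULL record input of no concrete type, the rowtype is again
    -- taken from the AS clause in FROM:
    SELECT *
    FROM json_populate_record(NULL::record, '{"a": 1, "b": "x"}')
         AS t(a int, b text);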
Reported-By: Heikki Linnakangas
Author: Michael Paquier
Discussion: https://postgr.es/m/d6ffbebb-a0d2-181c-811d-b029b2225ed7@iki.fi
Backpatch: 12-, where pluggable table access methods were introduced
Commit 4b93f5799 rearranged things in plpgsql to make it cope better with
composite types changing underneath it intra-session. However, I failed to
consider the case of a composite type being dropped and recreated entirely.
In my defense, the previous coding didn't consider that possibility at all
either --- but it would accidentally work so long as you didn't change the
type's field list, because the built-at-compile-time list of component
variables would then still match the type's new definition. The new
coding, however, occasionally tries to re-look-up the type by OID, and
then fails to find the dropped type.
To fix this, we need to save the TypeName struct, and then redo the type
OID lookup from that. Of course that's expensive, so we don't want to do
it every time we need the type OID. This can be fixed in the same way that
4b93f5799 dealt with changes to composite types' definitions: keep an eye
on the type's typcache entry to see if its tupledesc has been invalidated.
(Perhaps, at some point, this mechanism should be generalized so it can
work for non-composite types too; but for now, plpgsql only tries to
cope with intra-session redefinitions of composites.)
I'm slightly hesitant to back-patch this into v11, because it changes
the contents of struct PLpgSQL_type as well as the signature of
plpgsql_build_datatype(), so in principle it could break code that is
poking into the innards of plpgsql. However, the only popular extension
of that ilk is pldebugger, and it doesn't seem to be affected. Since
this is a regression for people who were relying on the old behavior,
it seems worth taking the small risk of causing compatibility issues.
Per bug #15913 from Daniel Fiori. Back-patch to v11 where 4b93f5799
came in.
Discussion: https://postgr.es/m/15913-a7e112e16dedcffc@postgresql.org
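A minimal sketch of the drop-and-recreate scenario (names are made up):

    CREATE TYPE comp AS (f1 int, f2 text);
    CREATE FUNCTION use_comp() RETURNS text AS $$
    DECLARE v comp;
    BEGIN
        v.f1 := 1;
        v.f2 := 'x';
        RETURN v.f2;
    END
    $$ LANGUAGE plpgsql;
    SELECT use_comp();
    DROP TYPE comp;
    CREATE TYPE comp AS (f1 int, f2 text);  -- same name, new OID
    SELECT use_comp();  -- formerly failed to find the dropped type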
ALTER SYSTEM itself normally won't make duplicate entries (although
up till this patch, it was possible to confuse it by writing case
variants of a GUC's name). However, if some external tool has appended
entries to the file, that could result in duplicate entries for a single
GUC name. In such a situation, ALTER SYSTEM did exactly the wrong thing,
because it replaced or removed only the first matching entry, leaving
the later one(s) still there and hence still determining the active value.
This patch fixes that by making ALTER SYSTEM sweep through the file and
remove all matching entries, then (if not ALTER SYSTEM RESET) append the
new setting to the end. This means entries will be in order of last
setting rather than first setting, but that shouldn't hurt anything.
Also, make the comparisons case-insensitive so that the right things
happen if you do, say, ALTER SYSTEM SET "TimeZone" = 'whatever'.
This has been broken since ALTER SYSTEM was invented, so back-patch
to all supported branches.
Ian Barwick, with minor mods by me
Discussion: https://postgr.es/m/aed6cc9f-98f3-2693-ac81-52bb0052307e@2ndquadrant.com
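For reference, both aspects in action:

    -- case-insensitive matching: this replaces any existing timezone
    -- entry in postgresql.auto.conf, whatever its spelling:
    ALTER SYSTEM SET "TimeZone" = 'UTC';
    -- and RESET now removes every matching line, including duplicates
    -- appended by external tools:
    ALTER SYSTEM RESET timezone;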
As coded, the ICU-collation path in pattern_char_isalpha() failed
to consider regular ASCII letters to be case-varying. This led to
like_fixed_prefix treating too much of an ILIKE pattern as being a
fixed prefix, so that indexscans derived from an ILIKE clause might
miss entries that they should find.
Per bug #15892 from James Inform. This is an oversight in the original
ICU patch (commit eccfef81e), so back-patch to v10 where that came in.
Discussion: https://postgr.es/m/15892-e5d2bea3e8a04a1b@postgresql.org
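Schematically (assumes an ICU build; the point is that an ILIKE pattern
yields no usable fixed prefix once ASCII letters are correctly treated
as case-varying):

    CREATE TABLE t (a text COLLATE "en-x-icu");
    INSERT INTO t VALUES ('ABCDEF');
    CREATE INDEX ON t (a text_pattern_ops);
    -- the buggy ICU path kept 'abc' as a fixed prefix, so an indexscan
    -- could miss this row:
    SELECT * FROM t WHERE a ILIKE 'abc%';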
Take into account that pg_server_to_any() may return the input string "as is".
Reported-by: Andrew Dunstan, Thomas Munro
Discussion: https://postgr.es/m/0ed83a33-d900-466a-880a-70ef456c721f%402ndQuadrant.com
Author: Alexander Korotkov, Thomas Munro
Backpatch-through: 12
We have implemented jsonpath string comparison using the default database
locale. However, the standard requires us to compare Unicode codepoints.
This commit implements that, but for performance reasons we still use
per-byte comparison for the "==" operator. Thus, for consistency, the
other comparison operators do per-byte comparison if the Unicode
codepoints appear to be equal.
In some edge cases, when the same Unicode codepoints have different
binary representations in the database encoding, we diverge from the
standard to achieve better performance of the "==" operator. In the
future, to implement strict standard conformance, we can normalize
input JSON strings.
Original patch was written by Nikita Glukhov, rewritten by me.
Reported-by: Markus Winand
Discussion: https://postgr.es/m/8B7FA3B4-328D-43D7-95A8-37B8891B8C78%40winand.at
Author: Nikita Glukhov, Alexander Korotkov
Backpatch-through: 12
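A small illustration of the codepoint semantics:

    -- 'B' (U+0042) sorts before 'b' (U+0062) by codepoint, regardless of
    -- the database collation:
    SELECT jsonb_path_query('["a", "B"]', '$[*] ? (@ < "b")');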
When parsing a timetz string with a dynamic timezone abbreviation, or
with no timezone specified, it was possible to generate incorrect
timestamps based on a date built from uninitialized variables if the
input string did not fully specify a date. This is already checked when
a full timezone spec is included in the input string, but the two other
cases mentioned above missed the same checks.
This is fixed by generating an error, since such input is invalid: in
short, a date must be fully specified.
Valgrind was complaining about this problem.
Bug: #15910
Author: Alexander Lakhin
Discussion: https://postgr.es/m/15910-2eba5106b9aa0c61@postgresql.org
Backpatch-through: 9.4
Commit aa27977fe21a7dfa4da4376ad66ae37cb8f0d0b5 introduced this
restriction for pg_temp.function_name(arg); do likewise for types
created in temporary schemas. Programs that this breaks should add
"pg_temp." schema qualification or switch to arg::type_name syntax.
Back-patch to 9.4 (all supported versions).
Reviewed by Tom Lane. Reported by Tom Lane.
Security: CVE-2019-10208
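A sketch of the newly required spellings (the type name is
hypothetical):

    CREATE TYPE pg_temp.color AS ENUM ('red', 'green');
    -- implicit resolution of the temp-schema name is now restricted;
    -- qualify it explicitly or use cast syntax:
    SELECT 'red'::pg_temp.color;
    SELECT CAST('red' AS pg_temp.color);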
This reverts commit 88bdbd3f746049834ae3cc972e6e650586ec3c9d.
As committed, statement sampling used the existing duration threshold
(log_min_duration_statement) when deciding which statements to sample.
The issue is that even the longest statements are subject to sampling,
and so may not end up logged. An improvement was proposed, introducing
a second duration threshold, but it would not be backwards compatible.
So we've decided to revert this feature - the separate threshold should
be part of the feature itself.
Discussion: https://postgr.es/m/CAFj8pRDS8tQ3Wviw9%3DAvODyUciPSrGeMhJi_WPE%2BEB8%2B4gLL-Q%40mail.gmail.com
These were introduced by pgindent due to a fix for broken
indentation (c.f. 8255c7a5eeba8). Previously the mis-indentation of
function prototypes was creatively used to reduce indentation in a few
places.
As that formatting only exists in master and REL_12_STABLE, it seems
better to fix it in both, rather than having some odd indentation in
v12 that somebody might copy for future patches or such.
Author: Andres Freund
Discussion: https://postgr.es/m/20190728013754.jwcbe5nfyt3533vx@alap3.anarazel.de
Backpatch: 12-
The rd_amcache allows an index AM to cache arbitrary information in a
relcache entry. This commit moves the cleanup of rd_amcache so that it
can also be used by table AMs. Nothing takes advantage of that yet, but
I'm sure it'll come in handy for anyone writing new table AMs.
Backpatch to v12, where the table AM interface was introduced.
Reviewed-by: Julien Rouhaud
pg_timezone_names() tries to avoid showing the "Factory" zone in
the view, mainly because that has traditionally had a very long
"abbreviation" such as "Local time zone must be set--see zic manual page",
so that showing it messes up psql's formatting of the whole view.
Since tzdb version 2016g, IANA instead uses the abbreviation "-00",
which is sane enough that there's no reason to discriminate against it.
On the other hand, it emerges that FreeBSD and possibly other packagers
are so wedded to backwards compatibility that they hack the IANA data
to keep the old spelling --- and not just that old spelling, but even
older spellings that IANA used back in the stone age. This caused the
filter logic to fail to suppress "Factory" at all on such platforms,
though the formatting problem is definitely real in that case.
To solve both problems, get rid of the hard-wired assumption about
exactly what Factory's abbreviation is, and instead reject abbreviations
exceeding 31 characters. This will allow Factory to appear in the view
if and only if it's using the modern abbreviation.
In passing, simplify the code we add to zic.c to support "zic -P"
to remove its now-obsolete hacks to not print the Factory zone's
abbreviation. Unlike pg_timezone_names(), there's no reason for
that code to support old/nonstandard timezone data.
Since we generally prefer to keep timezone-related behavior the
same in all branches, and since this is arguably a bug fix,
back-patch to all supported branches.
Discussion: https://postgr.es/m/3961.1564086915@sss.pgh.pa.us
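After the change, one can check what a given installation shows:

    -- Factory appears here only if its abbreviation is the modern "-00":
    SELECT name, abbrev FROM pg_timezone_names WHERE name = 'Factory';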
Money values exceeding about 18 digits (depending on lc_monetary)
could be inaccurately converted to numeric, due to select_div_scale()
deciding it didn't need to compute any fractional digits. Force
its hand by setting the dscale of one division input to equal the
number of fractional digits we need.
In passing, rearrange the logic to not do useless work in locales
where money values are considered integral.
Per bug #15925 from Slawomir Chodnicki. Back-patch to all supported
branches.
Discussion: https://postgr.es/m/15925-da9953e2674bb5c8@postgresql.org
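An example near the top of money's range (assumes a locale with two
fractional digits):

    -- the cents are no longer dropped in the conversion:
    SELECT '92233720368547758.07'::money::numeric;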
Otherwise, after a deleted page gets even older, it becomes unrecyclable
again. B-tree has the same problem, and has had since time immemorial,
but let's at least fix this in GiST, where this is new.
Backpatch to v12, where GiST page deletion was introduced.
Reviewed-by: Andrey Borodin
Discussion: https://www.postgresql.org/message-id/835A15A5-F1B4-4446-A711-BF48357EB602%40yandex-team.ru
I was careless passing a datum directly to DATE_NOT_FINITE without
calling DatumGetDateADT() first.
Backpatch-through: 9.4
The values 'infinity' and '-infinity' are a part of the DATE type
itself, so a bound of the date 'infinity' is not the same as an
unbounded/infinite range. However, it is still wrong to try to
canonicalize such values, because adding or subtracting one has no
effect. Fix by treating 'infinity' and '-infinity' the same as
unbounded ranges for the purposes of canonicalization (but not other
purposes).
Backpatch to all versions because it is inconsistent with the
documented behavior. Note that this could be an incompatibility for
applications relying on the behavior contrary to the documentation.
Author: Laurenz Albe
Reviewed-by: Thomas Munro
Discussion: https://postgr.es/m/77f24ea19ab802bc9bc60ddbb8977ee2d646aec1.camel%40cybertec.at
Backpatch-through: 9.4
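For illustration:

    -- an 'infinity' bound is no longer adjusted during canonicalization:
    SELECT daterange('2000-01-01', 'infinity', '[]');
    -- but it still differs from a genuinely unbounded range:
    SELECT daterange('2000-01-01', 'infinity', '[]')
         = daterange('2000-01-01', NULL);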
Commit 3ca930fc3 modified get_actual_variable_range() to use a new
"SnapshotNonVacuumable" snapshot type for selecting tuples that it
would consider valid. However, because that snapshot type can accept
recently-dead tuples, this caused a bug when using a recently-created
index: we might accept a recently-dead tuple that is an early member
of a broken HOT chain and does not actually match the index entry.
Then, the data extracted from the heap tuple would not necessarily be
an endpoint value of the column; it could even be NULL, leading to
get_actual_variable_range() itself reporting "found unexpected null
value in index". Even without an error, this could lead to poor
plan choices due to an erroneous notion of the endpoint value.
We can improve matters by changing the code to use the index-only
scan technique (which didn't exist when get_actual_variable_range was
originally written). If any of the tuples in a HOT chain are live
enough to satisfy SnapshotNonVacuumable, we take the data from the
index entry, ignoring what is in the heap. This fixes the problem
without changing the live-vs-dead-tuple behavior from what was
intended by commit 3ca930fc3.
A side benefit is that for static tables we might not have to touch
the heap at all (when the extremal value is in an all-visible page).
In addition, we can save some overhead by not having to create a
complete ExecutorState, and we don't need to run FormIndexDatum,
avoiding more cycles as well as the possibility of failure for
indexes on expressions. (I'm not sure that this code would ever
be used to determine the extreme value of an expression, in the
current state of the planner; but it's definitely possible that
lower-order columns of the selected index could be expressions.
So one could construct perhaps-artificial examples in which the
old code unexpectedly failed due to trying to compute an
expression's value for a now-dead row.)
Per report from Manuel Rigger. Back-patch to v11 where commit
3ca930fc3 came in.
Discussion: https://postgr.es/m/CA+u7OA7W4NWEhCvftdV6_8bbm2vgypi5nuxfnSEJQqVKFSUoMg@mail.gmail.com
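A sketch of a query whose planning probes the index endpoints this way:

    CREATE TABLE t (a int);
    INSERT INTO t SELECT generate_series(1, 100000);
    CREATE INDEX t_a_idx ON t (a);
    ANALYZE t;
    -- estimating this range selectivity calls get_actual_variable_range(),
    -- which now reads the extremal values from the index entries alone:
    EXPLAIN SELECT * FROM t WHERE a > 99990;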
pgperltidy and reformat-dat-files too, though the latter didn't
find anything to change.
Author: Alexander Lakhin
Discussion: https://postgr.es/m/af27d1b3-a128-9d62-46e0-88f424397f44@gmail.com