| Commit message | Author | Age |
| |
Previously, spilled units greater than months were truncated to months.
Also document the spill behavior.
Reported-by: Bryn Llewellyn
Discussion: https://postgr.es/m/BDAE4B56-3337-45A2-AC8A-30593849D6C0@yugabyte.com
Backpatch-through: master
| |
This fixes a couple of related problems that arise when raising
numbers to very large powers.
Firstly, when raising a negative number to a very large integer power,
the result should be well-defined, but the previous code would only
cope if the exponent was small enough to go through power_var_int().
Otherwise it would throw an internal error, attempting to take the
logarithm of a negative number. Fix this by adding suitable handling
to the general case in power_var() to cope with negative bases,
checking for integer powers there.
Next, when raising a (positive or negative) number whose absolute
value is slightly less than 1 to a very large power, the result should
approach zero as the power is increased. However, in some cases, for
sufficiently large powers, this would lose all precision and return 1
instead of 0. This was due to the way that the local_rscale was being
calculated for the final full-precision calculation:
local_rscale = rscale + (int) val - ln_dweight + 8
The first two terms on the right hand side are meant to give the
number of significant digits required in the result ("val" being the
estimated result weight). However, this failed to account for the fact
that rscale is clipped to a maximum of NUMERIC_MAX_DISPLAY_SCALE
(1000), and the result weight might be less than -1000, causing their
sum to be negative, leading to a loss of precision. Fix this by
forcing the number of significant digits calculated to be nonnegative.
It's OK for it to be zero (when the result weight is less than -1000),
since the local_rscale value then includes a few extra digits to
ensure an accurate result.
Finally, add additional underflow checks to exp_var() and power_var(),
so that they consistently return zero for cases like this where the
result is indistinguishable from zero. Some paths through this code
already returned zero in such cases, but others were throwing overflow
errors.
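For illustration, queries of roughly this shape are affected (the values
are arbitrary examples, not taken from the commit):
    -- negative base with a very large integer exponent: now well-defined
    SELECT (-1.0) ^ 5000000001;
    -- |base| slightly below 1 raised to a huge power: now underflows to 0
    -- instead of potentially returning 1 or raising an overflow error
    SELECT 0.9999999999 ^ 100000000000000000;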
Dean Rasheed, reviewed by Yugo Nagata.
Discussion: http://postgr.es/m/CAEZATCW6Dvq7+3wN3tt5jLj-FyOcUgT5xNoOqce5=6Su0bCR0w@mail.gmail.com
| |
It's not clear what the semantics of negative strides would be, so throw
an error instead.
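The function isn't named here; assuming it is date_bin(), the v14 addition
that takes a stride argument, the rejected case would look like:
    SELECT date_bin('-1 hour'::interval,
                    timestamp '2021-07-20 11:22:33',
                    timestamp '2001-01-01');
    -- now raises an error rather than producing an arbitrary result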
Per report from Bauyrzhan Sakhariyev
Reviewed-by: Tom Lane, Michael Paquier
Discussion: https://www.postgresql.org/message-id/CAKpL73vZmLuFVuwF26FJ%2BNk11PVHhAnQRoREFcA03x7znRoFvA%40mail.gmail.com
Backpatch to v14
| |
This changes the behavior of examining the pg_file_settings view after
changing a config option that requires restart. The user needs to know
that any change of such options does not take effect until a restart,
and this worked correctly if the line was edited without being removed.
However, when the line is removed altogether, the flag doesn't get set,
because it was only set in set_config_option, which is not called for
removed lines. Repair.
(Ref.: commits 62d16c7fc561 and a486e35706ea)
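A hedged illustration (assuming the flag in question is the pending_restart
column shown by pg_settings): after deleting a restart-only line such as
shared_buffers from postgresql.conf and reloading, the setting should now
be flagged as pending a restart.
    SELECT pg_reload_conf();
    SELECT name, pending_restart FROM pg_settings WHERE pending_restart;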
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/202107262302.xsfdfc5sb7sh@alvherre.pgsql
| |
Error messages using the word "non-negative" are confusing because it's
ambiguous whether zero is accepted or not. This commit improves those
error messages by replacing that word with less ambiguous phrases like
"greater than zero" or "greater than or equal to zero".
Also, this commit adds a note about the word "non-negative" to the error
message style guide, to help with writing new error messages.
Previously, when the postgres_fdw option fetch_size was set to zero, the
error message "fetch_size requires a non-negative integer value" was
reported. That error message was outright buggy, so back-patch to all
supported versions where such a buggy error message could be thrown.
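For example (the server name is illustrative), setting the option to zero
now produces the reworded message:
    ALTER SERVER loopback OPTIONS (fetch_size '0');
    -- old: "fetch_size requires a non-negative integer value"
    -- new: wording along the lines of "greater than zero"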
Reported-by: Hou Zhijie
Author: Bharath Rupireddy
Reviewed-by: Kyotaro Horiguchi, Fujii Masao
Discussion: https://postgr.es/m/OS0PR01MB5716415335A06B489F1B3A8194569@OS0PR01MB5716.jpnprd01.prod.outlook.com
| |
This patch causes EXPLAIN output to refer to objects that are in
the current session's temp schema with the "pg_temp" schema alias
rather than that schema's actual name. This is useful for our own
testing purposes since it will stabilize EXPLAIN VERBOSE output
for such cases, allowing us to use that in regression tests.
It should be less confusing for end users too.
Since ruleutils.c needs to change behavior for this, the change
also leaks into a few other users of ruleutils.c, for example
pg_get_viewdef(). AFAICS that won't cause any problems.
We did find that aggressively trying to change this behavior
across-the-board would cause issues, but as long as "pg_temp"
only appears within generated SQL text, I think it'll be fine.
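For instance (a sketch; the temp schema's real name varies per session):
    CREATE TEMP TABLE scratch (x int);
    EXPLAIN (VERBOSE, COSTS OFF) SELECT x FROM scratch;
    -- output now refers to "pg_temp.scratch" rather than e.g. "pg_temp_3.scratch"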
Along the way, make get_namespace_name_or_temp conform to the
same API as get_namespace_name, ie that it returns a palloc'd
string or NULL. The current behavior hasn't caused any bugs
since no callers attempt to pfree the result, but if it gets
more widespread usage that could become a problem.
Amul Sul, reviewed and extended by me
Discussion: https://postgr.es/m/CAAJ_b97W=QaGmag9AhWNbmx3uEYsNkXWL+OVW1_E1D3BtgWvtw@mail.gmail.com
| |
Formerly, when specifying NUMERIC(precision, scale), the scale had to
be in the range [0, precision], which was per SQL spec. This commit
extends the range of allowed scales to [-1000, 1000], independent of
the precision (whose valid range remains [1, 1000]).
A negative scale implies rounding before the decimal point. For
example, a column might be declared with a scale of -3 to round values
to the nearest thousand. Note that the display scale remains
non-negative, so in this case the display scale will be zero, and all
digits before the decimal point will be displayed.
A scale greater than the precision supports fractional values with
zeros immediately after the decimal point.
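For example (values chosen to match the description above):
    SELECT 1234.56::numeric(5,-3);   -- rounds before the decimal point: 1000
    SELECT 0.00123::numeric(3,5);    -- scale greater than precision: 0.00123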
Take the opportunity to tidy up the code that packs, unpacks and
validates the contents of a typmod integer, encapsulating it in a
small set of new inline functions.
Bump the catversion because the allowed contents of atttypmod have
changed for numeric columns. This isn't a change that requires a
re-initdb, but negative scale values in the typmod would confuse old
backends.
Dean Rasheed, with additional improvements by Tom Lane. Reviewed by
Tom Lane.
Discussion: https://postgr.es/m/CAEZATCWdNLgpKihmURF8nfofP0RFtAKJ7ktY6GcZOPnMfUoRqA@mail.gmail.com
| |
Bauyrzhan Sakhariyev, via Github
Backpatch to v14
| |
Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/20210716.170209.175434392011070182.horikyota.ntt%40gmail.com
| |
In postgresql.conf, memory and file size GUCs can be specified with "B"
(bytes) as of b06d8e58b. Likewise, time GUCs can be specified with "us"
(microseconds) as of caf626b2c. Update postgres.conf.sample to reflect
that fact.
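Hedged examples of the two unit suffixes (values are arbitrary, and the same
spellings are assumed to work with SET since GUC unit parsing is shared):
    SET work_mem = '131072 B';            -- bytes
    SET statement_timeout = '500000 us';  -- microseconds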
Pavel Luzanov
Backpatch to v12, which is the earliest version that allows both of
these units. A separate commit will document the "B" case for v11.
Discussion: https://www.postgresql.org/message-id/flat/f10d16fc-8fa0-1b3c-7371-cb3a35a13b7a%40postgrespro.ru
| |
Oversight in ea1b99a66
Yukun Wang
Backpatch to v14 where this parameter was introduced
Discussion: https://www.postgresql.org/message-id/flat/OS0PR01MB6003FCEFF0201EF21685FD33B4E39%40OS0PR01MB6003.jpnprd01.prod.outlook.com
| |
It doesn't match the normal coding style.
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Discussion: https://www.postgresql.org/message-id/flat/22016aa9-ca59-15c7-01df-f292cb558c4d@enterprisedb.com
| |
This just makes the progress output easier to follow.
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Discussion: https://www.postgresql.org/message-id/flat/22016aa9-ca59-15c7-01df-f292cb558c4d@enterprisedb.com
| |
Instead of castNode(…, lfoo(…))
Author: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Discussion: https://www.postgresql.org/message-id/flat/87eecahraj.fsf@wibble.ilmari.org
| |
It has been spotted that multiranges lack the ability to be decomposed
into individual ranges. Subscripting and proper expanded object
representation require substantial work, and it's too late for v14. This commit
provides the implementation of unnest(multirange), which is quite trivial.
unnest(multirange) is defined as a polymorphic procedure.
Catversion is bumped.
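For example:
    SELECT unnest('{[1,3),[5,9)}'::int4multirange);
    -- returns the rows [1,3) and [5,9)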
Reported-by: Jonathan S. Katz
Discussion: https://postgr.es/m/flat/60258efe-bd7e-4886-82e1-196e0cac5433%40postgresql.org
Author: Alexander Korotkov
Reviewed-by: Justin Pryzby, Jonathan S. Katz, Zhihong Yu, Tom Lane
Reviewed-by: Alvaro Herrera
| |
Commit 0563a3a8b changed how partition constraints were generated such
that this function no longer computes the mapping of parent attnos to
child attnos.
This is an external function that extensions could use, so this is
potentially a breaking change. No external callers are known, however,
and this will make it simpler to write such callers in the future.
Author: Hou Zhijie
Reviewed-by: David Rowley, Michael Paquier, Soumyadeep Chakraborty
Discussion: https://www.postgresql.org/message-id/flat/OS0PR01MB5716A75A45BE46101A1B489894379@OS0PR01MB5716.jpnprd01.prod.outlook.com
| |
"Result Cache" was never a great name for this node, but nobody managed
to come up with another name that anyone liked enough. That was until
David Johnston mentioned "Node Memoization", which Tom Lane revised to
just "Memoize". People seem to like "Memoize", so let's do the rename.
Reviewed-by: Justin Pryzby
Discussion: https://postgr.es/m/20210708165145.GG1176@momjian.us
Backpatch-through: 14, where Result Cache was introduced
| |
The name introduced by commit 4656e3d66 was agreed to be unreasonably
long. To match this change, rename initdb's recently-added
--clobber-cache option to --discard-caches.
Discussion: https://postgr.es/m/1374320.1625430433@sss.pgh.pa.us
| |
41469253e went to the trouble of removing a theoretical bug from
free_sort_tuple by checking if the tuple was NULL before freeing it. Let's
make this a little more robust by also setting the tuple to NULL so that
should we be called again we won't end up doing a pfree on the already
pfree'd tuple. Per advice from Tom Lane.
Discussion: https://postgr.es/m/3188192.1626136953@sss.pgh.pa.us
Backpatch-through: 9.6, same as 41469253e
| |
This fixes a theoretical bug in tuplesort.c: if a bounded sort was used
in combination with a byval Datum sort (tuplesort_begin_datum), then when
switching the sort to a bounded heap in make_bounded_heap() we'd call
free_sort_tuple(). The problem was that when sorting Datums of a byval
type, the tuple is NULL, and free_sort_tuple() would free the memory for
it regardless. This would result in a crash.
Here we fix that simply by adding a check to see if the tuple is NULL
before trying to disassociate and free any memory belonging to it.
The reason this bug is only theoretical is that nowhere in the current
code base do we do tuplesort_set_bound() when performing a Datum sort.
However, let's backpatch a fix for this as if any extension uses the code
in this way then it's likely to cause problems.
Author: Ronan Dunklau
Discussion: https://postgr.es/m/CAApHDvpdoqNC5FjDb3KUTSMs5dg6f+XxH4Bg_dVcLi8UYAG3EQ@mail.gmail.com
Backpatch-through: 9.6, oldest supported version
| |
If an error occurred in the wrong place, it was possible to leave an
uninitialized entry in the hash table, leading to a crash. Fixed.
Also, be more careful about the order of operations so that an
allocation error doesn't leak memory in CacheMemoryContext or
unnecessarily advance NextRecordTypmod.
Backpatch through version 11. Earlier versions (prior to 35ea75632a5)
do not exhibit the problem, because an uninitialized hash entry
contains a valid empty list.
Author: Sait Talha Nisanci <Sait.Nisanci@microsoft.com>
Reviewed-by: Andres Freund
Discussion: https://postgr.es/m/HE1PR8303MB009069D476225B9A9E194B8891779@HE1PR8303MB0090.EURPRD83.prod.outlook.com
Backpatch-through: 11
| |
This fixes an overflow error when using the numeric * operator if the
result has more than 16383 digits after the decimal point by rounding
the result. Overflow errors should only occur if the result has too
many digits *before* the decimal point.
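An illustrative case (inputs are arbitrary; their exact product has roughly
20000 digits after the decimal point):
    SELECT 1.0e-10000::numeric * 1.1e-10000::numeric;
    -- previously raised an overflow error; now the result is rounded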
Discussion: https://postgr.es/m/CAEZATCUmeFWCrq2dNzZpRj5+6LfN85jYiDoqm+ucSXhb9U2TbA@mail.gmail.com
| |
There was talk about adding units all the way up to yottabytes but it
seems quite far-fetched that anyone would need those. Since such large
units are not exactly commonplace, it seems unlikely that having
pg_size_pretty output units any larger than petabytes would actually be
helpful to anyone.
Since petabytes are on the horizon, let's just add those only. Maybe one
day we'll get to add additional units, but it will likely be a while
before we'll need to think beyond petabytes in regards to the size of a
database.
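For instance (an arbitrary value well above a petabyte):
    SELECT pg_size_pretty(1000000000000000000::bigint);
    -- now reported in PB rather than as a very large number of TB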
Author: David Christensen
Discussion: https://postgr.es/m/CAOxo6XKmHc_WZip-x5QwaOqFEiCq_SVD0B7sbTZQk+qqcn2qaw@mail.gmail.com
| |
We've grown 2 versions of pg_size_pretty over the years, one for BIGINT
and one for NUMERIC. Both should output the same, but keeping them in
sync is harder than needed due to neither function sharing a source of
truth about which units to use and how to transition to the next largest
unit.
Here we add a static array which defines the units that we recognize and
have both pg_size_pretty and pg_size_pretty_numeric use it. This will
make adding any units in the future a very simple task.
The table contains all information required to allow us to also modify
pg_size_bytes to use the lookup table, so adjust that too.
There are no behavioral changes here.
Author: David Rowley
Reviewed-by: Dean Rasheed, Tom Lane, David Christensen
Discussion: https://postgr.es/m/CAApHDvru1F7qsEVL-iOHeezJ+5WVxXnyD_Jo9nht+Eh85ekK-Q@mail.gmail.com
| |
Due to how pg_size_pretty(bigint) was implemented, it was possible that,
when given a negative number of bytes, the return value would not match
the value returned for the equivalent positive number of bytes. This was
due to two separate issues.
1. The function used bit shifting to convert the number of bytes into
larger units. The rounding performed by bit shifting is not the same as
dividing. For example -3 >> 1 = -2, but -3 / 2 = -1. These two
operations are only equivalent with positive numbers.
2. The half_rounded() macro rounded towards positive infinity. This meant
that negative numbers rounded towards zero and positive numbers rounded
away from zero.
Here we fix #1 by dividing the values instead of bit shifting. We fix #2
by adjusting the half_rounded macro always to round away from zero.
Additionally, adjust the pg_size_pretty(numeric) function to be more
explicit that it's using division rather than bit shifting. A casual
observer might have believed bit shifting was used due to a static
function being named numeric_shift_right. However, that function was
calculating the divisor from the number of bits and performed division.
Here we make that more clear. This change is just cosmetic and does not
affect the return value of the numeric version of the function.
Here we also add a set of regression tests for both versions of
pg_size_pretty() which test the values directly before and after the
point where the function switches to the next unit.
This bug was introduced in 8a1fab36a. Prior to that negative values were
always displayed in bytes.
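An illustrative check (an arbitrary value near a unit boundary); the two
results should now differ only in sign:
    SELECT pg_size_pretty(10485247::bigint)  AS pos,
           pg_size_pretty(-10485247::bigint) AS neg;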
Author: Dean Rasheed, David Rowley
Discussion: https://postgr.es/m/CAEZATCXnNW4HsmZnxhfezR5FuiGgp+mkY4AzcL5eRGO4fuadWg@mail.gmail.com
Backpatch-through: 9.6, where the bug was introduced.
| |
Commit 03ffc4d6d added logic to bypass all caching behavior in
LookupOpclassInfo when CLOBBER_CACHE_ALWAYS is enabled. It doesn't
look like I stopped to think much about what that would cost, but
recent investigation shows that the cost is enormous: it roughly
doubles the time needed for cache-clobber test runs.
There does seem to be value in this behavior when trying to test
the opclass-cache loading logic itself, but for other purposes the
cost is excessive. Hence, let's back off to doing this only when
debug_invalidate_system_caches_always is at least 3; or in older
branches, when CLOBBER_CACHE_RECURSIVELY is defined.
While here, clean up some other minor issues in LookupOpclassInfo.
Re-order the code so we aren't left with broken cache entries (leading
to later core dumps) in the unlikely case that we suffer OOM while
trying to allocate space for a new entry. (That seems to be my
oversight in 03ffc4d6d.) Also, in >= v13, stop allocating one array
entry too many. That's evidently left over from sloppy reversion in
851b14b0c.
Back-patch to all supported branches, mainly to reduce the runtime
of cache-clobbering buildfarm animals.
Discussion: https://postgr.es/m/1370856.1625428625@sss.pgh.pa.us
| |
Formerly various numeric aggregate functions supported parallel
aggregation by having each worker convert partial aggregate values to
Numeric and use numeric_send() as part of serializing their state.
That's problematic, since the range of Numeric is smaller than that of
NumericVar, so it's possible for it to overflow (on either side of the
decimal point) in cases that would succeed in non-parallel mode.
Fix by serializing NumericVars instead, to avoid the overflow risk and
ensure that parallel and non-parallel modes work the same.
A side benefit is that this improves the efficiency of the
serialization/deserialization code, which can make a noticeable
difference to performance with large numbers of parallel workers.
No back-patch due to risk from changing the binary format of the
aggregate serialization states, as well as lack of prior field
complaints and low probability of such overflows in practice.
Patch by me. Thanks to David Rowley for review and performance
testing, and Ranier Vilela for an additional suggestion.
Discussion: https://postgr.es/m/CAEZATCUmeFWCrq2dNzZpRj5+6LfN85jYiDoqm+ucSXhb9U2TbA@mail.gmail.com
| |
This has the advantage of making a process more responsive when the
postmaster dies, even if the wait time was rather limited as there was
only a 50ms timeout here. Another advantage of this change is for
monitoring, as we gain a new wait event for the end-of-vacuum
truncation.
Author: Bharath Rupireddy
Reviewed-by: Aleksander Alekseev, Thomas Munro, Michael Paquier
Discussion: https://postgr.es/m/CALj2ACU4AdPCq6NLfcA-ZGwX7pPCK5FgEj-CAU0xCKzkASSy_A@mail.gmail.com
| |
Several places were performing a tight loop to determine the first power
of 2 number that's > or >= the required memory. Instead of using a loop
for that, we can use pg_nextpower2_32 or pg_nextpower2_64. When we need a
power of 2 number equal to or greater than a given amount, we just pass
the amount to the nextpower2 function. When we need a power of 2 greater
than the amount, we just pass the amount + 1.
Additionally, in tsearch there were a couple of locations that were
performing a while loop when a simple "if" would have done. In both of
these locations only 1 item is being added, so the loop could only have
ever iterated once. Changing the loop into an if statement makes the code
very slightly more optimal as the condition is checked once rather than
twice.
There are quite a few remaining locations that increase the size of the
buffer in the following form:
while (reqsize >= buflen)
{
buflen *= 2;
buf = repalloc(buf, buflen);
}
These are not touched in this commit. repalloc will error out for sizes
larger than MaxAllocSize. Changing these to use pg_nextpower2_32 would
remove the chance of that error being raised. It's unclear from the code
if the sizes could ever become that large, so err on the side of caution.
Discussion: https://postgr.es/m/CAApHDvp=tns7RL4PH0ZR0M+M-YFLquK7218x=0B_zO+DbOma+w@mail.gmail.com
Reviewed-by: Zhihong Yu
| |
A few variables have been using 10 as a magic constant while
PG_STAT_GET_REPLICATION_SLOT_COLS can be used instead.
Author: Masahiko Sawada
Reviewed-By: Amit Kapila
Backpatch-through: 14, where it was introduced
Discussion: https://postgr.es/m/CAD21AoBvqODDfmD17DkEuPCvV2KbruukXQ2Vwrv5Xi-TsAsTJA@mail.gmail.com
| |
* Fix enumeration of the multirange operators in calc_multirangesel() and
calc_multirangesel() switches.
* Add more regression tests for matching to empty ranges/multiranges.
Reported-by: Alexander Lakhin
Discussion: https://postgr.es/m/c5269c65-f967-77c5-ff7c-15e621c47f6a%40gmail.com
Author: Alexander Korotkov
Backpatch-through: 14, where multiranges were introduced
| |
Instead of defining symbols such as AmOidIndexId explicitly, include
them as an argument of DECLARE_INDEX() and have genbki.pl generate them
the same way as the table OID symbols from the CATALOG() declaration.
Reviewed-by: John Naylor <john.naylor@enterprisedb.com>
Discussion: https://www.postgresql.org/message-id/flat/ccef1e46-a404-25b1-9b4c-85f2c08e1f28%40enterprisedb.com
| |
The logic is implemented so that there can be a choice in the compression
used when building a WAL record, and an extra per-record bit is used to
track down if a block is compressed with PGLZ, LZ4 or nothing.
wal_compression, the existing parameter, is changed to an enum with
support for the following backward-compatible values:
- "off", the default, to not use compression.
- "pglz" or "on", to compress FPWs with PGLZ.
- "lz4", the new mode, to compress FPWs with LZ4.
Benchmarking has shown that LZ4 easily outclasses PGLZ. ZSTD would
also be an interesting choice, but going just with LZ4 for now makes the
patch minimalistic as toast compression is already able to use LZ4, so
there is no need to worry about any build-related needs for this
implementation.
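For example, on a build configured with LZ4 support:
    ALTER SYSTEM SET wal_compression = 'lz4';
    SELECT pg_reload_conf();
    SHOW wal_compression;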
Author: Andrey Borodin, Justin Pryzby
Reviewed-by: Dilip Kumar, Michael Paquier
Discussion: https://postgr.es/m/3037310D-ECB7-4BF1-AF20-01C10BB33A33@yandex-team.ru
| |
We were using RelationGetIndexList() to update the relation's replica
identity index but instead, we can directly use RelationGetReplicaIndex()
which uses the same functionality. This is a minor code readability
improvement.
Author: Japin Li
Reviewed-By: Takamichi Osumi, Amit Kapila
Discussion: https://postgr.es/m/4C99A862-69C8-431F-960A-81B1151F1B89@enterprisedb.com
| |
The setting has no effect except during startup. It's still nice to be
able to change it dynamically, which is expected to be pretty useful to
an admin following crash recovery when restarting the cluster is not so
appealing.
Per discussions following commits 2941138e6 and 61752afb2.
Author: Justin Pryzby <pryzby@telsasoft.com>
Reviewed-by: Fujii Masao <masao.fujii@oss.nttdata.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://postgr.es/m/20210529192321.GM2082%40telsasoft.com
| |
Take some untranslatable things out of the message and replace by
format placeholders, to reduce translatable strings and reduce
translation mistakes.
| |
The main goal of this option is to allow inspecting temporary files for
debugging purposes, so moving the parameter there is natural.
Oversight in cd91de0.
Reported-by: Justin Pryzby
Author: Euler Taveira
Discussion: https://postgr.es/m/20210612004347.GP16435@telsasoft.com
| |
In the previous commit, I missed that relmap_redo() was also not acquiring the
RelationMappingLock. Thanks to Thomas Munro for pointing that out.
Backpatch-through: 9.6, like previous commit.
Discussion: https://www.postgresql.org/message-id/CA%2BhUKGLev%3DPpOSaL3WRZgOvgk217et%2BbxeJcRr4eR-NttP1F6Q%40mail.gmail.com
| |
Contrary to the comment here, POSIX does not guarantee atomicity of a
read(), if another process calls write() concurrently. Or at least Linux
does not. Add locking to load_relmap_file() to avoid the race condition.
Fixes bug #17064. Thanks to Alexander Lakhin for the report and test case.
Backpatch-through: 9.6, all supported versions.
Discussion: https://www.postgresql.org/message-id/17064-bb0d7904ef72add3@postgresql.org
| |
Commit e7eea52b2d introduced a new function,
RelationGetIdentityKeyBitmap, which fails to handle the case where there
is no replica identity index on a relation.
Author: Mark Dilger
Reviewed-by: Takamichi Osumi, Amit Kapila
Discussion: https://www.postgresql.org/message-id/4C99A862-69C8-431F-960A-81B1151F1B89@enterprisedb.com
| |
Reported-by: Tom Lane
Discussion: https://postgr.es/m/CAPpHfdvcnw3x7jdV3r52p4%3D5S4WUxBCzcQKB3JukQHoicv1LSQ%40mail.gmail.com
| |
It has been spotted that multiranges lack the ability to be decomposed
into individual ranges. Subscripting and proper expanded object
representation require substantial work, and it's too late for v14. This commit
provides the implementation of unnest(multirange) and cast multirange as
an array of ranges, which is quite trivial.
unnest(multirange) is defined as a polymorphic procedure. The catalog
description of the cast underlying procedure is duplicated for each multirange
type because we don't have anyrangearray polymorphic type to use here.
Catversion is bumped.
Reported-by: Jonathan S. Katz
Discussion: https://postgr.es/m/flat/60258efe-bd7e-4886-82e1-196e0cac5433%40postgresql.org
Author: Alexander Korotkov
Reviewed-by: Justin Pryzby, Jonathan S. Katz, Zhihong Yu
| |
Previously, a zero value for the relfilenode resulted in
a confusing error message about "unexpected duplicate".
This function returns NULL for other invalid relfilenode
values, so zero should be treated likewise.
It's been like this all along, so back-patch to all supported
branches.
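The function isn't named above; assuming it is pg_filenode_relation(), the
behavior would be:
    SELECT pg_filenode_relation(0, 0);
    -- now returns NULL like other invalid values, instead of failing
    -- with "unexpected duplicate"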
Justin Pryzby
Discussion: https://postgr.es/m/20210612023324.GT16435@telsasoft.com
| |
Commit 2453ea142 redefined pg_proc.proargtypes to include the types of
OUT parameters, for procedures only. While that had some advantages
for implementing the SQL-spec behavior of DROP PROCEDURE, it was pretty
disastrous from a number of other perspectives. Notably, since the
primary key of pg_proc is name + proargtypes, this made it possible to
have multiple procedures with identical names + input arguments and
differing output argument types. That would make it impossible to call
any one of the procedures by writing just NULL (or "?", or any other
data-type-free notation) for the output argument(s). The change also
seems likely to cause grave confusion for client applications that
examine pg_proc and expect the traditional definition of proargtypes.
Hence, revert the definition of proargtypes to what it was, and
undo a number of complications that had been added to support that.
To support the SQL-spec behavior of DROP PROCEDURE, when there are
no argmode markers in the command's parameter list, we perform the
lookup both ways (that is, matching against both proargtypes and
proallargtypes), succeeding if we get just one unique match.
In principle this could result in ambiguous-function failures
that would not happen when using only one of the two rules.
However, overloading of procedure names is thought to be a pretty
rare usage, so this shouldn't cause many problems in practice.
Postgres-specific code such as pg_dump can defend against any
possibility of such failures by being careful to specify argmodes
for all procedure arguments.
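A sketch of that rule (names are illustrative):
    CREATE PROCEDURE ptest(IN a int, OUT b int)
    LANGUAGE plpgsql AS $$ BEGIN b := a + 1; END; $$;
    -- With no argmode markers, both lookups are tried, so either spelling works:
    DROP PROCEDURE ptest(int);         -- matches the input arguments (proargtypes)
    -- DROP PROCEDURE ptest(int, int); -- would match via proallargtypes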
This also fixes a few other bugs in the area of CALL statements
with named parameters, and improves the documentation a little.
catversion bump forced because the representation of procedures
with OUT arguments changes.
Discussion: https://postgr.es/m/3742981.1621533210@sss.pgh.pa.us
| |
The use of this function is limited to superusers, and the code includes
a hardcoded check for that. However, the code would look up the PGPROC
entry to signal for the memory dump before checking whether the user is
a superuser, which does not make sense if we know that an error will be
returned. Note that the code previously let even non-authorized users
find out whether a given PID belonged to a PostgreSQL process, which is
no longer the case; the main benefit, though, is avoiding taking
ProcArrayLock when it would most likely turn out to be unnecessary.
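Assuming the function in question is pg_log_backend_memory_contexts(), a
call such as the following from a non-superuser now fails on the permission
check before any PGPROC lookup or ProcArrayLock acquisition:
    SELECT pg_log_backend_memory_contexts(pg_backend_pid());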
Thanks to Julien Rouhaud and Tom Lane for the discussion.
Discussion: https://postgr.es/m/YLxw1uVGIAP5uMPl@paquier.xyz
| |
A few patches committed after ca3b37487 mistakenly forgot to make the
copyright year 2021. Fix these.
Discussion: https://postgr.es/m/CAApHDvqyLmd9P2oBQYJ=DbrV8QwyPRdmXtCTFYPE08h+ip0UJw@mail.gmail.com
| |
Commit 3db826bd5 disallowed this case, but it turns out that some
people are depending on it. Since the core grammar has allowed
it since 3dc37cd8d, it seems like this code should fall in line.
Per bug #17045 from Robert Sosinski.
Discussion: https://postgr.es/m/17045-6a4a9f0d1513f72b@postgresql.org
| |
This case should be disallowed, just as FOR UPDATE with a plain
GROUP BY is disallowed; FOR UPDATE only makes sense when each row
of the query result can be identified with a single table row.
However, we missed teaching CheckSelectLocking() to check
groupingSets as well as groupClause, so that it would allow
degenerate grouping sets. That resulted in a bad plan and
a null-pointer dereference in the executor.
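The now-rejected case looks like this (the table name is illustrative):
    SELECT 1 FROM some_table GROUP BY () FOR UPDATE;
    -- now fails just like a plain GROUP BY ... FOR UPDATE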
Looking around for other instances of the same bug, the only one
I found was in examine_simple_variable(). That'd just lead to
silly estimates, but it should be fixed too.
Per private report from Yaoguang Chen.
Back-patch to all supported branches.
| |
Braces were referred to in some error messages as just "brackets" (not
curly brackets or curly braces), which can be confusing as other types
of brackets could be meant.
While on it, add one test to check the case of junk characters detected
after a right brace.
Author: Kyotaro Horiguchi
Discussion: https://postgr.es/m/20210514.153153.1814935914483287479.horikyota.ntt@gmail.com