VACUUM may truncate the heap in several batches. An activity report
is logged for each batch, containing the number of pages in the table
before and after the truncation, as well as the time elapsed during
the truncation. Previously the elapsed time reported for each batch was
the total time elapsed from the start of the truncation until that
batch finished. For example, if the truncation was split into three
batches, the second batch reported the accumulated time spent on both
the first and second batches. This is confusing because the page counts
reported alongside it are per-batch, not cumulative. Instead, each
batch should report only the time spent on that batch.
The cause of this issue was that the resource usage snapshot was
initialized only at the beginning of the truncation and was never
reset afterwards. This commit fixes the issue by making VACUUM reset
the resource usage snapshot for each batch.
Back-patch to all supported branches.
Reported-by: Tatsuhito Kasahara
Author: Tatsuhito Kasahara
Reviewed-by: Masahiko Sawada, Fujii Masao
Discussion: https://postgr.es/m/CAP0=ZVJsf=NvQuy+QXQZ7B=ZVLoDV_JzsVC1FRsF1G18i3zMGg@mail.gmail.com
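As a rough standalone illustration of the fix (not the actual VACUUM code, which uses PostgreSQL's pg_rusage machinery), the essential point is that the timing snapshot must be re-taken at the top of every batch rather than once before the loop:

#include <stdio.h>
#include <time.h>

/* Hypothetical stand-in for one heap-truncation batch. */
static void
truncate_one_batch(void)
{
}

int
main(void)
{
    for (int batch = 1; batch <= 3; batch++)
    {
        struct timespec start, end;

        /* Reset the snapshot for every batch (the fix); taking it once
         * before the loop would make later batches report cumulative
         * time instead of per-batch time. */
        clock_gettime(CLOCK_MONOTONIC, &start);
        truncate_one_batch();
        clock_gettime(CLOCK_MONOTONIC, &end);

        double elapsed = (double) (end.tv_sec - start.tv_sec) +
            (double) (end.tv_nsec - start.tv_nsec) / 1e9;

        printf("batch %d: truncated in %.3f s\n", batch, elapsed);
    }
    return 0;
}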
This fixes and updates a couple of comments related to outdated Windows
versions. In particular, src/common/exec.c had a fallback implementation
for reading a file's line from a pipe, needed because stdin/stdout/stderr
do not exist on Windows 2000; that fallback is removed to simplify
src/common/, as versions of Postgres are unlikely to be running on such
platforms.
Author: Michael Paquier
Reviewed-by: Kyotaro Horiguchi, Juan José Santamaría Flecha
Discussion: https://postgr.es/m/20191219021526.GC4202@paquier.xyz
Manifested as
ERROR: subtransaction logged without previous top-level txn record
this check forbids legitimate behaviours like:
- The first xl_xact_assignment record is beyond reading, i.e. earlier
than restart_lsn.
- After restart_lsn there is some change of a subxact.
- After that, there is a second xl_xact_assignment (for another subxact)
revealing the relationship between the top-level xact and the first
subxact.
Such a transaction won't be streamed anyway because we haven't seen it
in full. Saying for sure whether the xact of some record encountered
after the snapshot was deserialized can be streamed or not requires
knowing whether it wrote something before the deserialization point --
if yes, it hasn't been seen in full and can't be decoded. The snapshot
doesn't carry such info, so there is no easy way to relax the check.
Reported-by: Hsu, John
Diagnosed-by: Arseny Sher
Author: Arseny Sher, Amit Kapila
Reviewed-by: Amit Kapila, Dilip Kumar
Backpatch-through: 9.5
Discussion: https://postgr.es/m/AB5978B2-1772-4FEE-A245-74C91704ECB0@amazon.com
btbuild() has nothing to say about how NULL values compare in nbtree.
Besides, there are _bt_compare() header comments that describe how NULL
values are handled.
Previously, a LOCK TABLE command issued on a parent table checked
the permissions not only on the parent table but also on the child
tables inheriting from it. This was a bug: inherited queries should
perform access permission checks on the parent table only. This
commit fixes LOCK TABLE so that it does not check permissions on
child tables.
This patch is applied only to the master branch. We decided not to
back-patch because it's not hard to imagine applications that expect
the old behavior, whose security the change would break.
Author: Amit Langote
Reviewed-by: Fujii Masao
Discussion: https://postgr.es/m/CAHGQGwE+GauyG7POtRfRwwthAGwTjPQYdFHR97+LzA4RHGnJxA@mail.gmail.com
Author: Daniel Gustafsson
Reviewed-by: Vik Fearing
Discussion: https://postgr.es/m/EBC3BFEB-664C-4063-81ED-29F1227DB012@yesql.se
When updating a table row with generated columns, only recompute those
generated columns whose base columns have changed in this update and
keep the rest unchanged. This can result in a significant performance
benefit. The required information was already kept in
RangeTblEntry.extraUpdatedCols; we just have to make use of it.
Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/b05e781a-fa16-6b52-6738-761181204567@2ndquadrant.com
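A minimal sketch of the idea using plain bitmasks instead of the server's Bitmapset/extraUpdatedCols machinery (names below are illustrative, not PostgreSQL APIs): a generated column is recomputed only when the set of columns changed by the UPDATE overlaps the set of base columns its expression depends on.

#include <stdbool.h>
#include <stdint.h>

/* One bit per column; purely illustrative, not the server's representation. */
typedef uint64_t ColumnSet;

typedef struct GeneratedColumn
{
    int       attnum;       /* the generated column itself */
    ColumnSet base_cols;    /* columns its expression reads */
} GeneratedColumn;

/* Recompute only if one of the base columns actually changed. */
static bool
needs_recompute(const GeneratedColumn *gc, ColumnSet updated_cols)
{
    return (gc->base_cols & updated_cols) != 0;
}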
The extraUpdatedCols field of the target RTE records which generated
columns are affected by an update. This is used in a variety of
places, including per-column triggers and foreign data wrappers. When
an update was initiated by a logical replication subscription, this
field was not filled in, so such an update would not affect generated
columns in a way that is consistent with normal updates. To fix,
factor out some code from analyze.c to fill in extraUpdatedCols in the
logical replication worker as well.
Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/b05e781a-fa16-6b52-6738-761181204567@2ndquadrant.com
This commit also reorders the wait event enum into alphabetical order.
Previously the enum entry for GSSOpenServer was added out of order.
Back-patch to v12, where commit b0b39f72b9 introduced the
GSSOpenServer wait event. In v12, the commit doesn't include
the reordering of the wait event enum, so as not to break ABI.
Author: Fujii Masao
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/949931aa-4ed4-d867-a7b5-de9c02b2292b@oss.nttdata.com
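The ABI concern is the usual one with C enums: inserting a member anywhere but the end renumbers every member after it, so extensions compiled against the old header silently disagree with the server about the numeric values. A generic illustration (not the actual wait-event enum):

/* Old layout: the numeric values are part of the ABI. */
typedef enum DemoWaitEventOld
{
    DEMO_ARCHIVER_MAIN,      /* 0 */
    DEMO_CHECKPOINTER_MAIN,  /* 1 */
    DEMO_WAL_WRITER_MAIN     /* 2 */
} DemoWaitEventOld;

/* Inserting an entry in the middle (e.g. to restore alphabetical order)
 * renumbers everything after it: DEMO2_WAL_WRITER_MAIN is now 3, so a
 * module compiled against the old enum reports the wrong event. */
typedef enum DemoWaitEventNew
{
    DEMO2_ARCHIVER_MAIN,     /* 0 */
    DEMO2_CHECKPOINTER_MAIN, /* 1 */
    DEMO2_GSS_OPEN_SERVER,   /* 2 */
    DEMO2_WAL_WRITER_MAIN    /* 3 */
} DemoWaitEventNew;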
Noted by Justin Pryzby, though I chose to just rip out the stale text,
as it's in no way relevant to this particular function.
Discussion: https://postgr.es/m/20200212182337.GZ1412@telsasoft.com
Make it a bit shorter and better-commented; no functional change.
Alvaro Herrera and Tom Lane
Discussion: https://postgr.es/m/20200212182337.GZ1412@telsasoft.com
Practically everybody who's ever added a column to one of the bootstrap
catalogs has been burnt by the need to update the relnatts field in the
initial pg_class data to match. Now that we use Perl scripts to
generate postgres.bki, we can have the machines take care of that,
by filling the field during genbki.pl.
While at it, use the BKI_DEFAULTS mechanism to eliminate repetitive
specifications of other column values in pg_class.dat, too. They
weren't particularly a maintenance problem, but this way is prettier
(certainly the spotty previous usage of BKI_DEFAULTS wasn't pretty).
No catversion bump needed, since this doesn't actually change the
contents of postgres.bki.
Per gripe from Justin Pryzby, though this is quite different from
his originally proposed solution.
Amit Langote, John Naylor, Tom Lane
Discussion: https://postgr.es/m/20200212182337.GZ1412@telsasoft.com
The write buffer was already lazily-allocated, so this is more
symmetric. It also means that a freshly-rewound tape (whether for
reading or writing) is not consuming memory for the buffer.
Discussion: https://postgr.es/m/97c46a59c27f3c38e486ca170fcbc618d97ab049.camel%40j-davis.com
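The pattern being applied is ordinary lazy allocation: the buffer is not grabbed when the tape is rewound, only when the first read actually needs it. A generic sketch under assumed names, not the logtape.c code itself:

#include <stdlib.h>

typedef struct Tape
{
    char  *read_buffer;     /* NULL until the first read */
    size_t buffer_size;
} Tape;

/* Rewinding records the desired buffer size but allocates nothing, so a
 * freshly-rewound but unused tape consumes no buffer memory. */
static void
tape_rewind_for_read(Tape *tape, size_t buffer_size)
{
    free(tape->read_buffer);            /* release any buffer from earlier use */
    tape->read_buffer = NULL;
    tape->buffer_size = buffer_size;
}

static void
tape_read_block(Tape *tape)
{
    if (tape->read_buffer == NULL)      /* allocate lazily, on first use */
        tape->read_buffer = malloc(tape->buffer_size);
    /* ... read the next block into tape->read_buffer ... */
}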
Commit 6bf0bc842 replaced float.c's CHECKFLOATVAL() macro with static
inline subroutines, but that wasn't too well thought out. In the original
coding, the unlikely condition (isinf(result) or result == 0) was checked
first, and the inf_is_valid or zero_is_valid condition only afterwards.
The inline-subroutine coding caused that to be swapped around, which is
pretty horrid for performance because (a) in common cases the is_valid
condition is twice as expensive to evaluate (e.g., requiring two isinf()
calls not one) and (b) in common cases the is_valid condition is false,
requiring us to perform the unlikely-condition check anyway. Net result
is that one isinf() call becomes two or three, resulting in visible
performance loss as reported by Keisuke Kuroda.
The original fix proposal was to revert the replacement of the macro,
but on second thought, that macro was just a bad idea from the beginning:
if anything it's a net negative for readability of the code. So instead,
let's just open-code all the overflow/underflow tests, being careful to
test the unlikely condition first (and mark it unlikely() to help the
compiler get the point).
Also, rather than having N copies of the actual ereport() calls, collapse
those into out-of-line error subroutines to save some code space. This
does mean that the error file/line numbers won't be very helpful for
figuring out where the issue really is --- but we'd already burned that
bridge by putting the ereports into static inlines.
In HEAD, check_float[48]_val() are gone altogether. In v12, leave them
present in float.h but unused in the core code, just in case some
extension is depending on them.
Emre Hasegeli, with some kibitzing from me and Andres Freund
Discussion: https://postgr.es/m/CANDwggLe1Gc1OrRqvPfGE=kM9K0FSfia0hbeFCEmwabhLz95AA@mail.gmail.com
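A standalone sketch of the resulting shape (the real code lives in float.c/float.h and reports errors with ereport; helper names here are illustrative): test the cheap, unlikely condition first and keep the error reporting out of line so the hot path stays small.

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* GCC/Clang branch-prediction hint, like the one used in the tree. */
#define unlikely(x) __builtin_expect((x) != 0, 0)

/* Out-of-line error reporter keeps the fast path compact. */
static void
float_overflow_error(void)
{
    fprintf(stderr, "value out of range: overflow\n");
    exit(EXIT_FAILURE);
}

static double
float8_mul_checked(double val1, double val2)
{
    double result = val1 * val2;

    /* isinf(result) is cheap and almost always false, so the costlier
     * "was an input already infinite?" part only runs in the rare
     * suspicious case.  (A matching underflow test would check for a
     * zero result from nonzero inputs.) */
    if (unlikely(isinf(result)) && !isinf(val1) && !isinf(val2))
        float_overflow_error();

    return result;
}

int
main(void)
{
    printf("%g\n", float8_mul_checked(2.0, 3.0));
    return 0;
}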
These should've been dropped by a8bb8eb58, but evidently were
missed. Not important enough to back-patch.
This removes some lseek() system calls.
Author: Thomas Munro
Reviewed-by: Andres Freund
Discussion: https://postgr.es/m/CA%2BhUKGJ%2BoHhnvqjn3%3DHro7xu-YDR8FPr0FL6LF35kHRX%3D_bUzg%40mail.gmail.com
Commit 4eaea3db introduced TupleHashTableHash(), but the signature
didn't match the other exposed functions. Separate it into internal
and external versions. The external version hides the details behind
an API more consistent with the other external functions, and the
internal version is still suitable for simplehash.
Marking an object as dependent on an extension did not have any
privilege check whatsoever; this allowed any user to mark objects as
droppable by anyone able to DROP EXTENSION, which could be used to cause
system-wide havoc. Disallow by checking that the calling user owns the
mentioned object.
(No constraints are placed on the extension.)
Security: CVE-2020-1720
Reported-by: Tom Lane
Discussion: 31605.1566429043@sss.pgh.pa.us
Reported-by: Justin Pryzby
Author: Justin Pryzby
Discussion: https://postgr.es/m/20200206021432.GA24549@telsasoft.com
Commit b10714080 moved the GinPageSetDeleteXid() call to a spot where
the "page" variable was pointing to the wrong page, causing the XID
to be inserted on a page that's not being deleted, thus allowing later
GinPageIsRecyclable tests to recycle the deleted page too soon.
It might be a good idea to stop using the single "page" variable for
multiple purposes in this function. But for the moment I just moved
the GinPageSetDeleteXid() call down beside the GinPageSetDeleted()
call, which seems like a more logical place for it anyway.
Back-patch to v11, as the faulty patch was. (Fortunately, the bug
hasn't made it into any release yet.)
Discussion: https://postgr.es/m/21620.1581098806@sss.pgh.pa.us
On a multi-level partitioned table, when adding a partition not directly
connected to the root table, foreign key constraints referencing the
root were not cloned to the new partition, leading to the FK being
possibly inadvertently violated later on.
This was caused by fuzzy thinking in CloneFkReferenced (commit
f56f8f8da6af): it was skipping constraints marked as having parents on
the theory that cloning those would create duplicates; but that's only
correct for the top level of the partitioning hierarchy. For levels
below that one, such constraints must still be considered and only
skipped if later on we see that we'd create duplicates. Apparently, I
(Álvaro) wrote the comments right but the code implemented something
slightly different.
Author: Jehan-Guillaume de Rorthais
Discussion: https://postgr.es/m/20200206004948.238352db@firost
When running TRUNCATE CASCADE on a child of a partitioned table
referenced by another partitioned table, the truncate was not applied to
partitions of the referencing table; this could leave rows violating the
constraint in the referencing partitioned table. Repair by walking the
pg_constraint chain all the way up to the topmost referencing table.
Note: any partitioned tables containing FKs that reference other
partitioned tables should be checked for possible violating rows, if
TRUNCATE has occurred in partitions of the referenced table.
Reported-by: Christophe Courtois
Author: Jehan-Guillaume de Rorthais
Discussion: https://postgr.es/m/20200204183906.115f693e@firost
Commit 147e3722f7 changed Tid scan so that it calls table_beginscan()
and uses the scan option for seq scan. This change caused two issues.
(1) The change caused Tid scan to take a predicate lock on the entire
    relation in a serializable transaction even when a relation-level
    lock is not necessary. This could lead to an unexpected
    serialization error.
(2) The change caused Tid scan to increment the seq_scan counter in
    the pg_stat_*_tables views even though it's not a seq scan. This
    could confuse users.
This commit adds the scan option for Tid scan and makes Tid scan
use it, to avoid those issues.
Back-patch to v12, where the bug was introduced.
Author: Tatsuhito Kasahara
Reviewed-by: Kyotaro Horiguchi, Masahiko Sawada, Fujii Masao
Discussion: https://postgr.es/m/CAP0=ZVKy+gTbFmB6X_UW0pP3WaeJ-fkUWHoD-pExS=at3CY76g@mail.gmail.com
The main benefit of doing so is that this allows llvm to ensure that
types match - previously that'd only be detected by a crash within the
called function. There were a number of cases where we passed a
superfluous parameter...
To avoid needing to add all the functions to llvmjit.{c,h}, instead
get them from the llvm module for llvmjit_types.c. Also use that for
the functions from llvmjit_types already in llvmjit.h.
Author: Soumyadeep Chakraborty and Andres Freund
Discussion: https://postgr.es/m/CADwEdooww3wZv-sXSfatzFRwMuwa186LyTwkBfwEW6NjtooBPA@mail.gmail.com
Expose two new entry points: one for only calculating the hash value
of a tuple, and another for looking up a hash entry when the hash
value is already known. This will be useful for disk-based Hash
Aggregation to avoid recomputing the hash value for the same tuple
after saving and restoring it from disk.
Discussion: https://postgr.es/m/37091115219dd522fd9ed67333ee8ed1b7e09443.camel%40j-davis.com
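A toy standalone version of the pattern (generic code, not the executor's TupleHashTable API): compute the hash once, keep it next to the spilled tuple, and reuse it on reload so the tuple is never hashed twice.

#include <stdint.h>
#include <stdio.h>

/* Toy FNV-1a hash standing in for the grouping-key hash function. */
static uint32_t
key_hash(const char *key)
{
    uint32_t h = 2166136261u;

    for (; *key; key++)
        h = (h ^ (uint8_t) *key) * 16777619u;
    return h;
}

/* "Lookup with a known hash": the caller passes a hash computed earlier,
 * e.g. one stored alongside a tuple spilled to disk, so no re-hashing. */
static unsigned
bucket_for_hash(uint32_t hash, unsigned nbuckets)
{
    return hash % nbuckets;
}

int
main(void)
{
    uint32_t saved_hash = key_hash("group-42");      /* before spilling */

    /* ... tuple and saved_hash written out and later read back ... */

    printf("bucket = %u\n", bucket_for_hash(saved_hash, 256));
    return 0;
}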
This merges the code emission for a number of opcodes by handling the
behavioural difference more locally. This reduces code, and also
improves the generated code a bit in some cases, by removing redundant
constants.
Author: Andres Freund
Discussion: https://postgr.es/m/20191023163849.sosqbfs5yenocez3@alap3.anarazel.de
Previously we used constant function pointer addresses, which prevents
inlining and other related optimizations.
Author: Andres Freund
Discussion: https://postgr.es/m/20191023163849.sosqbfs5yenocez3@alap3.anarazel.de
It's already tracked via ExprState->parent, so we don't need to also
include it in ExprEvalStep. When that code was originally written,
ExprState->parent didn't exist, but it has since been introduced in
6719b238e8f.
Author: Andres Freund
Discussion: https://postgr.es/m/20191023163849.sosqbfs5yenocez3@alap3.anarazel.de
This mostly consists of using C99 style for loops, moving variables
into narrower scopes, and a smattering of other minor improvements.
Done separately to make it easier to review patches with actual
functional changes.
Author: Andres Freund
Discussion: https://postgr.es/m/20191023163849.sosqbfs5yenocez3@alap3.anarazel.de
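For readers unfamiliar with the style change, the kind of mechanical rewrite involved looks like this (illustrative code, not an actual hunk from the patch):

/* Before: C89-style, loop counter declared at the top of the function. */
static int
sum_before(const int *vals, int nvals)
{
    int i;
    int total = 0;

    for (i = 0; i < nvals; i++)
        total += vals[i];
    return total;
}

/* After: C99-style, loop variable scoped to the loop that uses it. */
static int
sum_after(const int *vals, int nvals)
{
    int total = 0;

    for (int i = 0; i < nvals; i++)
        total += vals[i];
    return total;
}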
Author: Julien Rouhaud
Discussion: https://postgr.es/m/20200206082333.GA95343@nol
This reverts commit 41aadee, which added GUC checks involving
ssl_max_protocol_version, as those checks could run on older values
with the new values used, and result in incorrect errors if both
parameters are changed at the same time.
Per complaint from Tom Lane.
Discussion: https://postgr.es/m/27574.1581015893@sss.pgh.pa.us
Backpatch-through: 12
In certain transient states, it's possible that a table has attributes
with attgenerated set but no default expressions in pg_attrdef yet.
In that case, the old code path would not set
relation->rd_att->constr->has_generated_stored, unless
relation->rd_att->constr was also populated for some other reason.
There was probably no practical impact, but it's better to keep this
consistent.
Reported-by: Andres Freund <andres@anarazel.de>
Discussion: https://www.postgresql.org/message-id/flat/20200115181105.ad6ab6dlgyww3lb6%40alap3.anarazel.de
Consolidate the calculations for hash table size estimation. This will
help with upcoming Hash Aggregation work that will add additional call
sites.
Previously, the freelist of blocks was tracked as an
occasionally-sorted array. A min heap is more resilient to larger
freelists or more frequent changes between reading and writing.
Discussion: https://postgr.es/m/97c46a59c27f3c38e486ca170fcbc618d97ab049.camel%40j-davis.com
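For illustration, here is a minimal binary min-heap over block numbers, the kind of structure the freelist now uses (a generic sketch, not the logtape.c implementation; the caller is assumed to have reserved enough capacity):

typedef struct BlockHeap
{
    long *blocks;       /* blocks[0] is always the smallest free block */
    int   nblocks;
} BlockHeap;

static void
heap_push(BlockHeap *h, long block)
{
    int i = h->nblocks++;

    h->blocks[i] = block;
    while (i > 0 && h->blocks[(i - 1) / 2] > h->blocks[i])
    {
        long tmp = h->blocks[i];            /* sift up */

        h->blocks[i] = h->blocks[(i - 1) / 2];
        h->blocks[(i - 1) / 2] = tmp;
        i = (i - 1) / 2;
    }
}

static long
heap_pop_min(BlockHeap *h)
{
    long min = h->blocks[0];
    int  i = 0;

    h->blocks[0] = h->blocks[--h->nblocks];
    for (;;)                                /* sift down */
    {
        int left = 2 * i + 1;
        int right = left + 1;
        int smallest = i;

        if (left < h->nblocks && h->blocks[left] < h->blocks[smallest])
            smallest = left;
        if (right < h->nblocks && h->blocks[right] < h->blocks[smallest])
            smallest = right;
        if (smallest == i)
            break;

        long tmp = h->blocks[i];

        h->blocks[i] = h->blocks[smallest];
        h->blocks[smallest] = tmp;
        i = smallest;
    }
    return min;
}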
Reported-by: Amit Langote
Author: Amit Langote
Backpatch-through: 9.6, where it was introduced
Discussion: https://postgr.es/m/CA+HiwqFNADeukaaGRmTqANbed9Fd81gLi08AWe_F86_942Gspw@mail.gmail.com
Previously, PostgreSQL built with -DLWLOCK_STATS could report
more than one LWLock statistics entry for the same backend
process and the same LWLock. This is strange; only one
statistics entry should be output in that case.
The cause of this issue is that the key variable used for the
LWLock stats hash table was not fully initialized. The key
consists of two fields, and those were initialized, but
the following 4 bytes allocated in the key variable for
alignment padding were not. So even if the same key
was specified, hash_search(HASH_ENTER) could fail to find
the existing entry for that key and would create a new one
instead.
This commit fixes the issue by initializing the whole key
variable with zeros. As a side effect of this commit,
the volume of LWLock statistics output is reduced
considerably.
Back-patch to v10, where commit 3761fe3c20 introduced the issue.
Author: Fujii Masao
Reviewed-by: Julien Rouhaud, Kyotaro Horiguchi
Discussion: https://postgr.es/m/26359edb-798a-568f-d93a-6aafac49752d@oss.nttdata.com
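The underlying C pitfall, shown as a hedged standalone sketch (field names are modeled on the description above, not copied from lwlock.c): when a struct is hashed and compared as raw bytes, uninitialized padding makes logically-equal keys look different, so the whole key must be zeroed first.

#include <string.h>

typedef struct lwlock_stats_key_demo
{
    int   tranche;      /* 4 bytes, then 4 padding bytes on LP64 ... */
    void *instance;     /* ... then 8 bytes */
} lwlock_stats_key_demo;

void
prepare_key(lwlock_stats_key_demo *key, int tranche, void *instance)
{
    /* Zero everything, including alignment padding, so that a hash table
     * that hashes/compares the raw bytes of the struct sees identical
     * bytes for identical logical keys.  Assigning the two fields alone
     * leaves the padding holding stale garbage. */
    memset(key, 0, sizeof(*key));
    key->tranche = tranche;
    key->instance = instance;
}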
This new field tracks the PID of the group leader used with parallel
query. For parallel workers and the leader, the value is set to the
PID of the group leader. So, for the group leader, the value is the
same as its own PID. Note that this reflects what PGPROC stores in
shared memory, so leader_pid is NULL if a backend has never been
involved in parallel query. If the backend is using parallel query or
has used it at least once, the value is set until the backend exits.
Author: Julien Rouhaud
Reviewed-by: Sergei Kornilov, Guillaume Lelarge, Michael Paquier, Tomas
Vondra
Discussion: https://postgr.es/m/CAOBaU_Yy5bt0vTPZ2_LUM6cUcGeqmYNoJ8-Rgto+c2+w3defYA@mail.gmail.com
Tuple conversion incorrectly concluded that no conversion was needed
as long as all the attributes lined up. But if the source tuple has a
missing attribute (from addition of a column with default), then the
destination tupdesc might not reflect the same default. The typical
symptom was that the affected columns would be unexpectedly NULL.
Repair by always forcing conversion if the source has missing
attributes, which will be filled in by the deform operation. (In
theory we could optimize for when the destination has the same
default, but that seemed overkill.)
Backpatch to 11 where missing attributes were added.
Per bug #16242.
Vik Fearing (discovery, code, testing) and me (analysis, testcase).
Discussion: https://postgr.es/m/16242-d1c9fca28445966b@postgresql.org
Using 32 bit counters means they can now realistically wrap around when
vacuuming extremely large tables. Because they're signed integers,
stats printed by vacuum look very odd when they do.
We'd love to backpatch this, but refrain because the variables are
exported and could cause third-party code to break.
Reviewed-by: Julien Rouhaud, Tom Lane, Michael Paquier
Discussion: https://postgr.es/m/20200131205926.GA16367@alvherre.pgsql
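The odd-looking stats are simply what happens when a count past 2^31-1 is squeezed into a signed 32-bit variable; a tiny illustration with made-up numbers (on the usual two's-complement platforms):

#include <stdint.h>
#include <stdio.h>

int
main(void)
{
    long long counted = 3000000000LL;       /* e.g. 3 billion dead tuples */

    int32_t as32 = (int32_t) counted;       /* wraps to a negative value */
    int64_t as64 = counted;                 /* keeps the real count */

    printf("int32: %d, int64: %lld\n", as32, (long long) as64);
    return 0;
}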
Use kevent(2) to wait for events on the BSD family of operating
systems and macOS. This is similar to the epoll(2) support added
for Linux by commit 98a64d0bd.
Author: Thomas Munro
Reviewed-by: Andres Freund, Marko Tiikkaja, Tom Lane
Tested-by: Mateusz Guzik, Matteo Beccati, Keith Fiske, Heikki Linnakangas, Michael Paquier, Peter Eisentraut, Rui DeSousa, Tom Lane, Mark Wong
Discussion: https://postgr.es/m/CAEepm%3D37oF84-iXDTQ9MrGjENwVGds%2B5zTr38ca73kWR7ez_tA%40mail.gmail.com
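For readers who know epoll(2) but not kevent(2), a bare-bones example of the BSD/macOS interface, waiting for a descriptor to become readable (a generic demo, not the WaitEventSet implementation):

#include <sys/types.h>
#include <sys/event.h>
#include <sys/time.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
    int kq = kqueue();
    struct kevent change;
    struct kevent event;

    if (kq < 0)
        return 1;

    /* Register interest: watch stdin for readability (like EPOLL_CTL_ADD). */
    EV_SET(&change, STDIN_FILENO, EVFILT_READ, EV_ADD, 0, 0, NULL);
    if (kevent(kq, &change, 1, NULL, 0, NULL) < 0)
        return 1;

    /* Block until one event is ready (like epoll_wait with no timeout). */
    if (kevent(kq, NULL, 0, &event, 1, NULL) > 0)
        printf("fd %d readable, %ld bytes buffered\n",
               (int) event.ident, (long) event.data);

    close(kq);
    return 0;
}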
Commit 74618e77 added a new check intended to fix a bug, but put
it in the wrong place so that parallel btree build was always
disabled. Do the check after we've actually tried to create
a DSM segment. Back-patch to 11, like the earlier commit.
Reviewed-by: Peter Geoghegan
Discussion: https://postgr.es/m/CAH2-WzmDABkJzrNnvf%2BOULK-_A_j9gkYg_Dz-H62jzNv4eKQTw%40mail.gmail.com
Commit 499be013d added this field in a rather poorly-thought-through
manner, with the result being that rather than being a field of the
Append or MergeAppend plan node as intended (and as it seems to be,
in text format), it was actually an element of the "Plans" subgroup.
At least in JSON format, that's flat out invalid syntax, because
"Plans" is an array not an object.
While it's not hard to move the generation of the field so that it
appears where it's supposed to, this does result in a visible change
in field order in text format, in cases where an Append or MergeAppend
plan node has any InitPlans attached. That's slightly annoying to
do in stable branches; but the alternative of continuing to emit
broken non-text formats seems worse.
Also, since the set of fields emitted is not supposed to be
data-dependent in non-text formats, make sure that "Subplans Removed"
appears in Append and MergeAppend nodes even when it's zero, in those
formats. (The previous coding made it look like it could appear in
some other node types such as BitmapAnd, but we don't actually support
runtime pruning there, so don't emit it in those cases.)
Per bug #16171 from Mahadevan Ramachandran. Fix by Daniel Gustafsson
and Tom Lane, reviewed by Hamid Akhtar. Back-patch to v11 where this
code came in.
Discussion: https://postgr.es/m/16171-b72259ab75505fa2@postgresql.org
When the replica identity is FULL (an admittedly unusual case), the loop
that searches for tuples in execReplication.c didn't stop scanning the
table once a matching tuple was found. Add the missing 'break'.
Note slight behavior change: we now return the first matching tuple
rather than the last one. They are supposed to be indistinguishable
anyway, so this shouldn't matter.
Author: Konstantin Knizhnik
Discussion: https://postgr.es/m/379743f6-ae91-b866-f7a2-5624e6d2b0a4@postgrespro.ru
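The bug and the behaviour change in miniature (generic code, not execReplication.c itself): without the break the function keeps scanning and ends up returning the last match; with it, scanning stops at the first match.

#include <stddef.h>

static const int *
find_match(const int *rows, int nrows, int target)
{
    const int *found = NULL;

    for (int i = 0; i < nrows; i++)
    {
        if (rows[i] == target)
        {
            found = &rows[i];
            break;      /* the missing statement: stop at the first match */
        }
    }
    return found;
}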
These new assertions can be used at file scope, outside of any function,
for compile-time checks. This commit provides implementations for C and
C++, as well as fallback implementations.
Author: Peter Smith
Reviewed-by: Andres Freund, Kyotaro Horiguchi, Dagfinn Ilmari Mannsåker,
Michael Paquier
Discussion: https://postgr.es/m/201DD0641B056142AC8C6645EC1B5F62014B8E8030@SYD1217
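A sketch of what a file-scope static assertion can look like in C: C11 _Static_assert where available, otherwise the classic negative-array-size trick (the macro names here are illustrative; the fallback actually added to c.h may differ):

#include <limits.h>

#define SA_CONCAT_(a, b) a##b
#define SA_CONCAT(a, b) SA_CONCAT_(a, b)

#if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 201112L
/* C11: directly usable at file scope, outside any function. */
#define STATIC_ASSERT_DECL(cond, msg) _Static_assert(cond, msg)
#else
/* Fallback: a negative array size makes the compiler reject a false cond. */
#define STATIC_ASSERT_DECL(cond, msg) \
    typedef char SA_CONCAT(static_assert_failed_, __LINE__)[(cond) ? 1 : -1]
#endif

/* Checked at compile time; no function body required. */
STATIC_ASSERT_DECL(sizeof(int) >= 4, "int must be at least 32 bits");
STATIC_ASSERT_DECL(CHAR_BIT == 8, "8-bit bytes assumed");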
Using a lookup table of digit pairs reduces the number of divisions
needed, and calculating the length upfront saves some work; these
ideas are taken from the code previously committed for floats.
David Fetter, reviewed by Kyotaro Horiguchi, Tels, and me.
Discussion: https://postgr.es/m/20190924052620.GP31596%40fetter.org
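The technique in a self-contained form (a simplified sketch of the approach, not the actual pg_ltoa code): a 200-byte lookup table provides two digits per division by 100, and computing the length up front lets each pair be written straight into its final position.

#include <stdio.h>
#include <string.h>

static const char DIGIT_PAIRS[] =
    "00010203040506070809101112131415161718192021222324"
    "25262728293031323334353637383940414243444546474849"
    "50515253545556575859606162636465666768697071727374"
    "75767778798081828384858687888990919293949596979899";

static int
decimal_length(unsigned int v)
{
    int len = 1;

    while (v >= 10)
    {
        v /= 10;
        len++;
    }
    return len;
}

/* Writes 'v' into buf (no terminator) and returns the number of digits. */
static int
uint_to_string(char *buf, unsigned int v)
{
    int len = decimal_length(v);
    int i = len;

    while (v >= 100)                 /* peel off two digits per division */
    {
        unsigned int rem = v % 100;

        v /= 100;
        i -= 2;
        memcpy(buf + i, DIGIT_PAIRS + rem * 2, 2);
    }
    if (v >= 10)
        memcpy(buf, DIGIT_PAIRS + v * 2, 2);
    else
        buf[0] = (char) ('0' + v);
    return len;
}

int
main(void)
{
    char buf[16];
    int  n = uint_to_string(buf, 1234567890u);

    printf("%.*s (%d digits)\n", n, buf, n);
    return 0;
}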
If we attempt to create a DSM segment when no slots are available,
we should return the memory to the operating system. Previously
we did that if the DSM_CREATE_NULL_IF_MAXSEGMENTS flag was
passed in, but we didn't do it if an error was raised. Repair.
Back-patch to 9.4, where DSM segments arrived.
Author: Thomas Munro
Reviewed-by: Robert Haas
Reported-by: Julian Backes
Discussion: https://postgr.es/m/CA%2BhUKGKAAoEw-R4om0d2YM4eqT1eGEi6%3DQot-3ceDR-SLiWVDw%40mail.gmail.com
This code would accept "strinX", where X is any 1-byte character,
as meaning "string". Clearly it wasn't meant to do that.
No back-patch, since this doesn't affect correct queries and
there's some tiny chance we'd break somebody's incorrect query
in a minor release.
Report and patch by Dominik Czarnota.
Discussion: https://postgr.es/m/CABEVAa1dU0mDCAfaT8WF2adVXTDsLVJy_izotg6ze_hh-cn8qQ@mail.gmail.com
Commit fc7695891 changed CheckAttributeType to recurse into ranges,
but made it pass down the wrong collation (always InvalidOid, since
ranges as such have no collation). This would result in guaranteed
failure when considering a range type whose subtype is collatable.
Embarrassingly, we lack any regression tests that would expose such
a problem (but fortunately, somebody noticed before we shipped this
bug in any release).
Fix it to pass down the range's subtype collation property instead,
and add some regression test cases to exercise collatable-subtype
ranges a bit more. Back-patch to all supported branches, as the
previous patch was.
Report and patch by Julien Rouhaud, test cases tweaked by me
Discussion: https://postgr.es/m/CAOBaU_aBWqNweiGUFX0guzBKkcfJ8mnnyyGC_KBQmO12Mj5f_A@mail.gmail.com
This might help clarify the API a bit.
When allocating DSM segments with posix_fallocate() on Linux (see commit
899bd785), report this activity as a wait event exactly as we would if
we were using file-backed DSM rather than shm_open()-backed DSM.
Author: Thomas Munro
Discussion: https://postgr.es/m/CA%2BhUKGKCSh4GARZrJrQZwqs5SYp0xDMRr9Bvb%2BHQzJKvRgL6ZA%40mail.gmail.com