| Commit message | Author | Age |
| |
The parameter 'rel' in lookup_var_attr_stats was once used to raise an
ERROR when ANALYZE failed to acquire sufficient data to build extended
statistics. bf2a691e0 changed the logic to raise a WARNING in the
caller instead. As a result, this parameter is no longer needed and
can be removed. Since this is a static function, we can easily
reintroduce the parameter if it's ever needed in the future.
Author: Ilia Evdokimov
Reviewed-by: Fabrízio de Royes Mello
Discussion: https://postgr.es/m/b3880f22-5808-4206-88d4-1553a81c3440@tantorlabs.com
| |
This commit introduces a new parameter named
autovacuum_worker_slots that controls how many autovacuum worker
slots to reserve during server startup. Modifying this new
parameter's value does require a server restart, but it should
typically be set to the upper bound of what you might realistically
need autovacuum_max_workers to be. With that new parameter in
place, autovacuum_max_workers can now be changed with a SIGHUP
(e.g., pg_ctl reload).
If autovacuum_max_workers is set higher than
autovacuum_worker_slots, a WARNING is emitted, and the server will
only start up to autovacuum_worker_slots workers at a given time.
If autovacuum_max_workers is set to a value less than the number of
currently-running autovacuum workers, the existing workers will
continue running, but no new workers will be started until the
number of running autovacuum workers drops below
autovacuum_max_workers.
Reviewed-by: Sami Imseih, Justin Pryzby, Robert Haas, Andres Freund, Yogesh Sharma
Discussion: https://postgr.es/m/20240410212344.GA1824549%40nathanxps13
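The relationship between the two GUCs can be sketched as below. This is
a hypothetical illustration of the capping rule described above, not the
actual autovacuum.c code; the variable and function names are made up.

#include <stdio.h>

static int autovacuum_worker_slots = 16;   /* fixed at server start */
static int autovacuum_max_workers = 32;    /* reloadable via SIGHUP */

/* Hypothetical helper: how many workers may run at any given time. */
static int
effective_av_workers(void)
{
    if (autovacuum_max_workers > autovacuum_worker_slots)
    {
        fprintf(stderr,
                "WARNING: \"autovacuum_max_workers\" (%d) exceeds \"autovacuum_worker_slots\" (%d)\n",
                autovacuum_max_workers, autovacuum_worker_slots);
        return autovacuum_worker_slots;
    }
    return autovacuum_max_workers;
}

int
main(void)
{
    printf("workers that may run concurrently: %d\n", effective_av_workers());
    return 0;
}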
| |
These are also present in procnumber.h
Reported-by: Peter Eisentraut
Discussion: https://www.postgresql.org/message-id/bd04d675-4672-4f87-800a-eb5d470c15fc@eisentraut.org
| |
Replace #define YY_EXTRA_TYPE with %option extra-type. The latter is
the way recommended by the flex manual (available since flex 2.5.34).
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Discussion: https://www.postgresql.org/message-id/flat/eb6faeac-2a8a-4b69-9189-c33c520e5b7b@eisentraut.org
| |
Previously, it would not have worked for a caller to pass a slab
context, since it would have been used for other things which likely
had incompatible size. In an attempt to be helpful and avoid possible
space wastage due to aset's power-of-two rounding, RT_CREATE would
create an additional slab context if the value type was fixed-length
and larger than pointer size. The problem was, we have since added
the bump context type, and the generation context was a possibility as
well, so silently overriding the caller's choice may actually be worse.
Commit e8a6f1f908d arranged so that the caller-provided context is
used only for leaves, so it's safe for the caller to use slab here
if they wish. As demonstration, use slab in one of the radix tree
regression tests.
Reviewed by Masahiko Sawada
Discussion: https://postgr.es/m/CANWCAZZDCo4k5oURg_pPxM6+WZ1oiG=sqgjmQiELuyP0Vtrwig@mail.gmail.com
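With the caller-provided context now used only for leaves, a slab context
sized to the fixed-length value type is a sensible choice. The fragment
below is a sketch under the assumption that the usual backend headers are
available; MyLeafValue is a made-up value type, and the RT_CREATE call
itself is omitted since its exact signature depends on how the template
is instantiated.

/* Sketch only: assumes PostgreSQL backend headers. */
#include "postgres.h"
#include "utils/memutils.h"

/* Hypothetical fixed-length value stored in the radix tree's leaves. */
typedef struct MyLeafValue
{
    int64       count;
    int64       last_seen;
} MyLeafValue;

static MemoryContext
make_leaf_context(MemoryContext parent)
{
    /*
     * A slab context sized exactly to the leaf value avoids aset's
     * power-of-two rounding.  With this commit it is safe to hand such
     * a context to RT_CREATE, since it is used only for leaf allocations.
     */
    return SlabContextCreate(parent,
                             "radix tree leaves",
                             SLAB_DEFAULT_BLOCK_SIZE,
                             sizeof(MyLeafValue));
}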
| |
Previously, this was notionally used only for the entry point of the
tree and as a convenient parent for other contexts.
For shared memory, the creator previously allocated the entry point
in this context, but attaching backends didn't have access to that,
so they just used the caller's context. For the sake of consistency,
allocate every instance of an entry point in the caller's context.
For local memory, allocate the control object in the caller's context
as well. This commit also makes the "leaf context" the notional parent
of the child contexts used for nodes, so it's a bit of a misnomer,
but a future commit will make the node contexts independent rather
than children, so leave it this way for now to avoid code churn.
The memory context parameter for RT_CREATE is now unused in the case
of shared memory, so remove it and adjust callers to match.
In passing, remove unused "context" member from struct TidStore,
which seems to have been an oversight.
Reviewed by Masahiko Sawada
Discussion: https://postgr.es/m/CANWCAZZDCo4k5oURg_pPxM6+WZ1oiG=sqgjmQiELuyP0Vtrwig@mail.gmail.com
| |
Typically only one iterator is present at any time, so it's overkill
to devote an entire context for this. Get rid of it and use the
caller's context.
This is tidy-up work, so no backpatch in this form. However, a
hypothetical extension to v17 that tried to start iteration from
an attaching backend would result in a crash, so that'll be fixed
separately in a way that doesn't change behavior in core.
Patch by me, reported and reviewed by Masahiko Sawada
Discussion: https://postgr.es/m/CAD21AoBB2U47V=F+wQRB1bERov_of5=BOZGaybjaV8FLQyqG3Q@mail.gmail.com
| |
Slightly faulty logic in the original jsonb code (commit d9134d0a355)
results in an empty top level array sorting less than a json null. We
can't change the sort order now since it would affect btree indexes over
jsonb, so document the anomaly.
Backpatch to all live branches (13 .. 17)
In master, also add a code comment noting the anomaly.
Reported-by: Yan Chengpen
Reviewed-by: Jian He
Discussion: https://postgr.es/m/OSBPR01MB45199DD8DA2D1CECD50518188E272@OSBPR01MB4519.jpnprd01.prod.outlook.com
| |
When looking up statistical data about an expression, we do not need
to concern ourselves with the outer joins that could null the
Vars/PHVs contained in the expression. Accounting for nullingrels in
the expression could cause estimate_num_groups to count the same Var
multiple times if it's marked with different nullingrels. This is
incorrect, and could lead to "ERROR: corrupt MVNDistinct entry" when
searching for multivariate n-distinct.
Furthermore, the nullingrels could prevent us from matching an
expression to expressional index columns or to the expressions in
extended statistics, leading to inaccurate estimates.
To fix, strip out all the nullingrels from the expression before we
look up statistical data about it. There is one ensuing plan change
in the regression tests, but it looks reasonable and does not
compromise the test's original purpose.
This patch could result in plan changes, but it fixes an actual bug,
so back-patch to v16 where the outer-join-aware-Var infrastructure was
introduced.
Author: Richard Guo
Discussion: https://postgr.es/m/CAMbWs4-2Z4k+nFTiZe0Qbu5n8juUWenDAtMzi98bAZQtwHx0-w@mail.gmail.com
| |
CHUNKHDRSZ was defined as 16 bytes, which was true when that code went in,
but since c6e0fe1f2, 8 is a more accurate value. Here we adjust it to use
sizeof(MemoryChunk), which is normally 8, or 16 for cassert builds.
c6e0fe1f2 first appeared in v16, so this is technically wrong in v16 up
to master, but let's apply this only to master as adjusting this does
influence the estimated number of batches in the aggregate costing code
and we don't want to cause plan instability in released versions.
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/CAApHDvpMpRQvsTqZo3FinXkgytwxwF8sCyZm83xDj-1s_hLe+w@mail.gmail.com
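The shape of the change can be sketched as follows; MemoryChunkStandIn is
a stand-in for the real MemoryChunk header, used here only so the snippet
is self-contained. Deriving the constant from sizeof() keeps it correct
if the header layout differs (for example in cassert builds).

#include <stdint.h>
#include <stdio.h>

/* Stand-in for the MemoryChunk header; 8 bytes in typical builds. */
typedef struct MemoryChunkStandIn
{
    uint64_t    hdrmask;
} MemoryChunkStandIn;

/* Before: a hard-coded constant that silently drifted out of date. */
/* #define CHUNKHDRSZ 16 */

/* After: derive the constant from the structure itself. */
#define CHUNKHDRSZ sizeof(MemoryChunkStandIn)

int
main(void)
{
    printf("chunk header overhead: %zu bytes\n", CHUNKHDRSZ);
    return 0;
}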
| |
Author: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/5812a0b9-b0cf-4151-9a14-d9f00e4f2858@gmail.com
| |
Backpatch-through: 13
| |
This was evidently missed in 05346c131, which renamed that
file to pl_gram.y.
Japin Li
Discussion: https://postgr.es/m/ME0P300MB0445F7CA7456C2AC67D37A01B6092@ME0P300MB0445.AUSP300.PROD.OUTLOOK.COM
| |
As coded, the module was not using pending entries to store its data
locally before flushing it to the central dshash with a timed
pgstat_report_stat() call. Hence, the flush callback was defined but
ended up unused. As a template, this is more efficient than the
original logic of directly updating the shared memory entries, as it
reduces the interactions needed with the pgstats hash table in shared
memory.
injection_stats_flush_cb() was also missing a pgstat_unlock_entry(),
so add one while we're at it.
Reviewed-by: Bertrand Drouvot
Discussion: https://postgr.es/m/Z3JbLhKFFm6kKfT8@ip-10-97-1-34.eu-west-3.compute.internal
| |
pgoutput caches the attribute map of a relation, which is free()'d only
when validating a RelationSyncEntry. However, this code path is not
taken when calling any of the SQL functions able to do some logical
decoding, like pg_logical_slot_{get,peek}_changes(), leaking some memory
into CacheMemoryContext on repeated calls.
To address this, a relation's attribute map is now allocated in
PGOutputData's cachectx and free()'d at the end of the execution of these
SQL functions, when logical decoding ends. This fix is applied down to
v15. v13 and v14 have a similar leak, which will be dealt with later.
Reported-by: Masahiko Sawada
Author: Vignesh C
Reviewed-by: Hou Zhijie
Discussion: https://postgr.es/m/CAD21AoDkAhQVSukOfH3_reuF-j4EU0-HxMqU3dU+bSTxsqT14Q@mail.gmail.com
Discussion: https://postgr.es/m/CALDaNm1hewNAsZ_e6FF52a=9drmkRJxtEPrzCB6-9mkJyeBBqA@mail.gmail.com
Backpatch-through: 15
| |
Author: Junwang Zhao
Discussion: https://postgr.es/m/CAEG8a3JbMCHna=N5ZSx6huLnTDfW34kw7Pf2n8+3M-9UrrwesA@mail.gmail.com
| |
At the beginning of recovery, an orphaned two-phase file in an epoch
different from the one defined in the checkpoint record could not be
removed, because AdjustToFullTransactionId() assumes that all files are
either from the current epoch or from the previous epoch.
If the checkpoint epoch was 0 while the 2PC file was orphaned and in the
future, AdjustToFullTransactionId() would underflow the epoch used to
build the 2PC file path. In non-assert builds, this would create a
WARNING message referring to a 2PC file with an epoch of "FFFFFFFF" (or
UINT32_MAX), as an effect of the underflow calculation, leaving the
orphaned file around.
Some tests are added with dummy 2PC files in the past and the future,
checking that these are properly removed.
Issue introduced by 5a1dfde8334b, which switched two-phase state
files to use FullTransactionIds.
Reported-by: Vitaly Davydov
Author: Michael Paquier
Reviewed-by: Vitaly Davydov
Discussion: https://postgr.es/m/13b5b6-676c3080-4d-531db900@47931709
Backpatch-through: 17
| |
Before 728bd991c3c4, which improved the support for 2PC files during
recovery, the initial logic scanning files in pg_twophase checked first
whether a file was in the future of the transaction ID horizon, and then
whether its transaction ID was aborted or committed, which could involve
a pg_xact lookup. After that commit, these checks are done in the
reverse order.
Files detected as in the future do not have a state that can be checked
in pg_xact, hence this caused recovery to fail abruptly should an
orphaned 2PC file in the future of the transaction ID horizon exist in
pg_twophase at the beginning of recovery.
A test is added to check for this scenario, using an empty 2PC file with
a transaction ID large enough to be in the future when running the test.
This test is added in 16 and older versions for now. 17 and newer
versions are impacted by a second bug caused by the addition of the
epoch in the 2PC file names. An equivalent test will be added in these
branches in a follow-up commit, once the second set of issues reported
are fixed.
Author: Vitaly Davydov, Michael Paquier
Discussion: https://postgr.es/m/11e597-676ab680-8d-374f23c0@145466129
Backpatch-through: 13
| |
These arrays were sized with Natts_pg_trigger (19) when they should have
been sized with Natts_pg_event_trigger (7). We'd better fix this as
it's clearly a mistake and it could become problematic if
pg_event_trigger were to gain a dozen or so more columns in the future.
No backpatch as there's no actual bug and the column count on those
tables isn't going to change in released versions.
Author: Xin Zhang <zhanghien@qq.com>
Discussion: https://postgr.es/m/tencent_05AD0FB321A414EC3661204D2102AA6EF605@qq.com
| |
Commit 34486b609 effectively redefined isBackgroundWorker as meaning
"not a regular backend", whereas before it had the narrower
meaning of AmBackgroundWorkerProcess(). For clarity, rename the
field to isRegularBackend and invert its sense.
Discussion: https://postgr.es/m/1808397.1735156190@sss.pgh.pa.us
| |
Cause parallel workers to not check datallowconn, rolcanlogin, and
ACL_CONNECT privileges. The leader already checked these things
(except for rolcanlogin which might have been checked for a different
role). Re-checking can accomplish little except to induce unexpected
failures in applications that might not even be aware that their query
has been parallelized. We already had the principle that parallel
workers rely on their leader to pass a valid set of authorization
information, so this change just extends that a bit further.
Also, modify the ReservedConnections, datconnlimit and rolconnlimit
logic so that these limits are only enforced against regular backends,
and only regular backends are counted while checking if the limits
were already reached. Previously, background processes that had an
assigned database or role were subject to these limits (with rather
random exclusions for autovac workers and walsenders), and the set of
existing processes that counted against each limit was quite haphazard
as well. The point of these limits, AFAICS, is to ensure the
availability of PGPROC slots for regular backends. Since all other
types of processes have their own separate pools of PGPROC slots, it
makes no sense either to enforce these limits against them or to count
them while enforcing the limit.
While edge-case failures of these sorts have been possible for a
long time, the problem got a good deal worse with commit 5a2fed911
(CVE-2024-10978), which caused parallel workers to make some of these
checks using the leader's current role where before we had used its
AuthenticatedUserId, thus allowing parallel queries to fail after
SET ROLE. The previous behavior was fairly accidental and I have
no desire to return to it.
This patch includes reverting 73c9f91a1, which was an emergency hack
to suppress these same checks in some cases. It wasn't complete,
as shown by a recent bug report from Laurenz Albe. We can also revert
fd4d93d26 and 492217301, which hacked around the same problems in one
regression test.
In passing, remove the special case for autovac workers in
CheckMyDatabase; it seems cleaner to have AutoVacWorkerMain pass
the INIT_PG_OVERRIDE_ALLOW_CONNS flag, now that that does what's
needed.
Like 5a2fed911, back-patch to supported branches (which sadly no
longer includes v12).
Discussion: https://postgr.es/m/1808397.1735156190@sss.pgh.pa.us
| |
The need for this was missed in commit 93db6cbda, with the result
being that if we launch a slotsync worker it would consume one of
the PGPROCs in the max_connections pool. That could lead to inability
to launch the worker, or to subsequent failures of connection requests
that should have succeeded according to the configured settings.
Rather than create some one-off infrastructure to support this,
let's group the slotsync worker with the existing autovac launcher
in a new category of "special worker" processes. These are kind of
like auxiliary processes, but they cannot use that infrastructure
because they need to be able to run transactions.
For the moment, make these processes share the PGPROC freelist
used for autovac workers (which previously supplied the autovac
launcher too). This is partly to avoid an ABI change in v17,
and partly because it seems silly to have a freelist with
at most two members. This might be worth revisiting if we grow
enough workers in this category.
Tom Lane and Hou Zhijie. Back-patch to v17.
Discussion: https://postgr.es/m/1808397.1735156190@sss.pgh.pa.us
| |
Commit aac2c9b4fde889d13f859c233c2523345e72d32b mandated such locking
and attempted to fulfill that mandate, but it missed REASSIGN OWNED.
Hence, it remained possible to lose VACUUM's inplace update of
datfrozenxid if a REASSIGN OWNED processed that database at the same
time. This didn't affect the other inplace-updated catalog, pg_class.
For pg_class, REASSIGN OWNED calls ATExecChangeOwner() instead of the
generic AlterObjectOwner_internal(), and ATExecChangeOwner() fulfills
the locking mandate.
Like in GRANT, implement this by following the locking protocol for any
catalog subject to the generic AlterObjectOwner_internal(). It would
suffice to do this for IsInplaceUpdateOid() catalogs only. Back-patch
to v13 (all supported versions).
Kirill Reshke. Reported by Alexander Kukushkin.
Discussion: https://postgr.es/m/CAFh8B=mpKjAy4Cuun-HP-f_vRzh2HSvYFG3rhVfYbfEBUhBAGg@mail.gmail.com
| |
This adjusts slot_deform_heap_tuple() to add special-case loops to
eliminate much of the branching that was done within the body of the
main deform loop.
Previously, while looping over each attribute to deform,
slot_deform_heap_tuple() would always recheck whether the given attribute
could be NULL by looking at HeapTupleHasNulls() and, if so, would go on to
check the tuple's NULL bitmap. Since many tuples won't contain any NULLs,
we can
just check HeapTupleHasNulls() once and when there are no NULLs, use a
more compact version of the deforming loop which contains no NULL checking
code at all.
The same is possible for the "slow" mode checking part of the loop. That
variable was checked several times for each attribute, once to determine
if the offset to the attribute value could be taken from the attcacheoff,
and again to check if the offset could be cached for next time.
These "slow" checks can mostly be eliminated by instead having multiple
loops. Initially, we can start in the non-slow loop and break out of
that loop if and only if we must stop caching the offset. This
eliminates branching for both slow and non-slow deforming methods. The
amount of code required for the no nulls / non-slow version is very
small. It's possible to have separate loops like this due to the fact
that once we move into slow mode, we never need to switch back into
non-slow mode for a given tuple.
We have the compiler take care of writing out the multiple required
loops by having a pg_attribute_always_inline function which gets called
various times passing in constant values for the "slow" and "hasnulls"
parameters. This allows the compiler to eliminate const-false branches
and remove comparisons for const-true ones.
This commit has shown overall query performance increases of around 5-20%
in deform-heavy OLAP-type workloads.
Author: David Rowley
Reviewed-by: Victor Yegorov
Discussion: https://postgr.es/m/CAGnEbog92Og2CpC2S8=g_HozGsWtt_3kRS1sXjLz0jKSoCNfLw@mail.gmail.com
Discussion: https://postgr.es/m/CAApHDvo9e0XG71WrefYaRv5n4xNPLK4k8LjD0mSR3c9KR2vi2Q@mail.gmail.com
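The specialization trick can be sketched as below. This is an
illustration of the pattern only, with made-up logic that is far simpler
than the real slot_deform_heap_tuple(): an always-inline worker takes the
"hasnulls" and "slow" flags as parameters, and thin wrappers call it with
compile-time constants so the compiler can drop the dead branches.

#include <stdbool.h>

#if defined(__GNUC__)
#define pg_attribute_always_inline __attribute__((always_inline)) inline
#else
#define pg_attribute_always_inline inline
#endif

/* Simplified stand-in for the per-attribute deforming work. */
static pg_attribute_always_inline void
deform_internal(const char *tp, int natts, bool hasnulls, bool slow,
                int *values, bool *isnull)
{
    for (int i = 0; i < natts; i++)
    {
        if (hasnulls && (i % 3 == 0))   /* vanishes when hasnulls == false */
        {
            isnull[i] = true;
            continue;
        }
        isnull[i] = false;
        if (slow)                       /* vanishes when slow == false */
            values[i] = tp[i] + 1;      /* "recompute the offset" path */
        else
            values[i] = tp[i];          /* "use the cached offset" path */
    }
}

/* Wrappers pass constants, so each one gets its own specialized loop. */
static void
deform_nonulls_fast(const char *tp, int natts, int *values, bool *isnull)
{
    deform_internal(tp, natts, false, false, values, isnull);
}

static void
deform_nulls_slow(const char *tp, int natts, int *values, bool *isnull)
{
    deform_internal(tp, natts, true, true, values, isnull);
}

int
main(void)
{
    char        tup[4] = {1, 2, 3, 4};
    int         values[4];
    bool        isnull[4];

    deform_nonulls_fast(tup, 4, values, isnull);
    deform_nulls_slow(tup, 4, values, isnull);
    return 0;
}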
| |
Previously, if an infinite value was passed to date_trunc(), then the
same infinite value would always be returned regardless of the field
unit given by the caller. This commit updates the function so that an
error is returned when an invalid unit is passed to date_trunc() with an
infinite value.
This matches the behavior of date_trunc() with a finite value and
date_part() with an infinite value, making the handling of interval,
timestamp and timestamptz more consistent across the board for these two
functions.
Some tests are added to cover all these new failure cases, with an
unsupported unit and infinite values for the three data types. There
were no test cases in core that checked all these patterns up to now.
Author: Joseph Koshakow
Discussion: https://postgr.es/m/CAAvxfHc4084dGzEJR0_pBZkDuqbPGc5wn7gK_M0XR_kRiCdUJQ@mail.gmail.com
| |
The totalrows parameter in compute_expr_stats is unused, so remove it.
This is a static function, so the parameter can easily be added again if
it's ever needed.
Author: Ilia Evdokimov <ilya.evdokimov@tantorlabs.ru>
Discussion: https://postgr.es/m/667b92d2-f953-4fcb-9377-3765f5b94187@tantorlabs.com
| |
Rename "core_yy_extra_type core_yy" to "core_yy_extra". The previous
name was a bit unclear and confusing. The new name matches the name
used elsewhere for the same purpose, for example in
src/backend/parser/gramparse.h.
| |
Author: Japin Li
Discussion: https://postgr.es/m/ME0P300MB0445D51BCFA8680F0B35FD6EB60C2@ME0P300MB0445.AUSP300.PROD.OUTLOOK.COM
| |
This fixes "unresolved external symbol" errors with extensions that
use functions from libpgport that need special CFLAGS to
compile. Currently, that includes the CRC-32 functions.
Commit 2571c1d5cc did this for libcommon, but I missed that libpgport
has the same issue.
Reported-by: Tom Lane
Backpatch-through: 16, where Meson was introduced
Discussion: https://www.postgresql.org/message-id/CAOdR5yF0krWrxycA04rgUKCgKugRvGWzzGLAhDZ9bzNv8g0Lag@mail.gmail.com
| |
Trying to clean up the code a bit while we're working on these files
for the reentrant scanner/pure parser patches. This cleanup only
touches the code sections after the second '%%' in each file, via a
manually-supervised and locally hacked up pgindent.
| |
This fixes "unresolved external symbol" errors with extensions that
use functions from libcommon. This was reported with pgvector.
Reported-by: Andrew Kane
Author: Vladlen Popolitov
Backpatch-through: 16, where Meson was introduced
Discussion: https://www.postgresql.org/message-id/CAOdR5yF0krWrxycA04rgUKCgKugRvGWzzGLAhDZ9bzNv8g0Lag@mail.gmail.com
| |
Use the flex %option reentrant to make the generated scanner
reentrant, and perhaps eventually thread-safe, but that will require
additional work.
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Reviewed-by: Andreas Karlsson <andreas@proxel.se>
Discussion: https://www.postgresql.org/message-id/flat/eb6faeac-2a8a-4b69-9189-c33c520e5b7b@eisentraut.org
| |
Use the flex %option reentrant to make the generated scanner
reentrant and thread-safe. Note: The parser was already pure.
Simplify flex scan buffer management: Instead of constructing the
buffer from pieces and then using yy_scan_buffer(), we can just use
yy_scan_string(), which does the same thing internally. (Actually, we
use yy_scan_bytes() here because we already have the length.)
Use flex yyextra to handle context information, instead of global
variables. This complements the other changes to make the scanner
reentrant.
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Reviewed-by: Andreas Karlsson <andreas@proxel.se>
Discussion: https://www.postgresql.org/message-id/flat/eb6faeac-2a8a-4b69-9189-c33c520e5b7b@eisentraut.org
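For reference, the calling pattern that a scanner generated with
%option reentrant exposes looks roughly like this. The prototypes are
hand-declared for flex's default "yy" prefix (PostgreSQL's scanners use
their own prefixes and extra-data types), and the snippet only links
against a scanner actually generated by flex, so treat it as a sketch of
the API shape rather than code from this commit.

#include <stdio.h>

/* Prototypes matching what flex emits for %option reentrant, "yy" prefix. */
typedef void *yyscan_t;
typedef struct yy_buffer_state *YY_BUFFER_STATE;

int         yylex_init_extra(void *user_defined, yyscan_t *scanner);
int         yylex(yyscan_t scanner);
int         yylex_destroy(yyscan_t scanner);
YY_BUFFER_STATE yy_scan_bytes(const char *bytes, int len, yyscan_t scanner);
void        yy_delete_buffer(YY_BUFFER_STATE buf, yyscan_t scanner);

/* Hypothetical per-scan state, carried via yyextra instead of globals. */
typedef struct ScanExtra
{
    int         token_count;
} ScanExtra;

static void
scan_one_string(const char *str, int len)
{
    ScanExtra   extra = {0};
    yyscan_t    scanner;
    YY_BUFFER_STATE buf;

    yylex_init_extra(&extra, &scanner);     /* no global state involved */
    buf = yy_scan_bytes(str, len, scanner); /* replaces yy_scan_buffer() piecework */

    while (yylex(scanner) != 0)
        extra.token_count++;

    yy_delete_buffer(buf, scanner);
    yylex_destroy(scanner);
    printf("scanned %d tokens\n", extra.token_count);
}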
| |
Oversight in commit 5bf748b86b.
| |
Use the flex %option reentrant and the bison option %pure-parser to
make the generated scanner and parser pure, reentrant, and
thread-safe.
Make the generated scanner use palloc() etc. instead of malloc() etc.
Previously, we only used palloc() for the buffer, but flex would still
use malloc() for its internal structures. Now, all the memory is
under palloc() control.
Simplify flex scan buffer management: Instead of constructing the
buffer from pieces and then using yy_scan_buffer(), we can just use
yy_scan_string(), which does the same thing internally.
The previous code was necessary because we allocated the buffer with
palloc() and the rest of the state was handled by malloc(). But this
is no longer the case; everything is under palloc() now.
Use flex yyextra to handle context information, instead of global
variables. This complements the other changes to make the scanner
reentrant.
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Reviewed-by: Andreas Karlsson <andreas@proxel.se>
Discussion: https://www.postgresql.org/message-id/flat/eb6faeac-2a8a-4b69-9189-c33c520e5b7b@eisentraut.org
| |
Use the flex %option reentrant and the bison option %pure-parser to
make the generated scanner and parser pure, reentrant, and
thread-safe.
Make the generated scanner use palloc() etc. instead of malloc() etc.
Previously, we only used palloc() for the buffer, but flex would still
use malloc() for its internal structures. As a result, there could be
some small memory leaks in case of uncaught errors. Now, all the
memory is under palloc() control, so there are no more such issues.
Simplify flex scan buffer management: Instead of constructing the
buffer from pieces and then using yy_scan_buffer(), we can just use
yy_scan_string(), which does the same thing internally.
The previous code was necessary because we allocated the buffer with
palloc() and the rest of the state was handled by malloc(). But this
is no longer the case; everything is under palloc() now.
Use flex yyextra to handle context information, instead of global
variables. This complements the other changes to make the scanner
reentrant.
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Co-authored-by: Andreas Karlsson <andreas@proxel.se>
Reviewed-by: Andreas Karlsson <andreas@proxel.se>
Discussion: https://www.postgresql.org/message-id/flat/eb6faeac-2a8a-4b69-9189-c33c520e5b7b@eisentraut.org
| |
Per git log, the last time someone tried to do something with
pgrminclude was around 2011. And it has always had a tendency to cause
trouble when it was active. Also, pgcompinclude is redundant with
headerscheck.
Discussion: https://www.postgresql.org/message-id/flat/2d4dc7b2-cb2e-49b1-b8ca-ba5f7024f05b%40eisentraut.org
| |
Per git log, the last time someone tried to do something with
pgrminclude was around 2011. Many (not all) of the "pgrminclude
ignore" annotations are of a newer date but seem to have just been
copied around during refactorings and file moves and don't seem to
reflect an actual need anymore.
There have been some parallel experiments with include-what-you-use
(IWYU) annotations, but these don't seem to correspond very strongly
to pgrminclude annotations, so there is no value in keeping the
existing ones even for that kind of thing.
So, wipe them all away. We can always add new ones in the future
based on actual needs.
Discussion: https://www.postgresql.org/message-id/flat/2d4dc7b2-cb2e-49b1-b8ca-ba5f7024f05b%40eisentraut.org
| |
5983a4cff added CompactAttribute as an abbreviated alternative to
FormData_pg_attribute to allow more cache-friendly processing in tasks
related to TupleDescs. That commit contained some assert-only code to
check that the CompactAttribute had been populated correctly; however,
the method used to do that checking caused the TupleDesc's
CompactAttribute to be zeroed before it was repopulated and compared to
the snapshot taken before the memset call. This caused issues because the
type cache caches TupleDescs in shared memory, which can be used by
multiple backend processes at the same time. There was a window of time
between the zeroing and the repopulation of the CompactAttribute where
another process would mistakenly think that the CompactAttribute is
invalid due to the memset.
To fix this, instead of taking a snapshot of the CompactAttribute and
calling populate_compact_attribute() and comparing the snapshot to the
freshly populated TupleDesc's CompactAttribute, refactor things so we
can just populate a temporary CompactAttribute on the stack. This way
we don't touch the TupleDesc's memory.
Reported-by: Alexander Lakhin, SQLsmith
Discussion: https://postgr.es/m/ca3a256a-5d12-42db-aabe-a75a030d9fb9@gmail.com
| |
These two platforms have a remarkably tight default limit on the
number of SysV semaphores in the system: SEMMNS is only 60
out-of-the-box. Unless manual action is taken to raise that,
we'll only be able to allocate 3 sets of 16 usable semaphores
each, leading to initdb setting max_connections to just 20.
That's problematic because the core regression tests expect
to be able to launch 20 concurrent sessions, leaving us with
no headroom. This seems to be the cause of intermittent
buildfarm failures on some machines.
While there's no getting around the fact that you'd better raise
SEMMNS for production use on these platforms, it does seem desirable
for "make check" to pass reliably without that. We can make that
happen, at least for awhile longer, with two small changes:
* Change sysv_sema.c's SEMAS_PER_SET to 19, so that we can eat up
all of the available semas not just most of them.
* Change initdb to make the smallest max_connections value it will
consider be 25 not 20.
As of HEAD this will leave us with four free semaphores (using the
default values for other relevant parameters such as max_wal_senders).
So we won't need to consider this again until we've invented five
more background processes. Maybe by then we can switch both these
platforms to some other semaphore API.
For the moment, do this only in master; there've not been field
complaints that might justify a back-patch.
Discussion: https://postgr.es/m/db2773a2-aca0-43d0-99c1-060efcd9954e@gmail.com
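The arithmetic behind those numbers, assuming (per sysv_sema.c) that each
allocated set holds SEMAS_PER_SET usable semaphores plus one extra used
internally as a set-identification marker:

#include <stdio.h>

int
main(void)
{
    const int   semmns = 60;            /* default system-wide limit */
    const int   per_set[] = {16, 19};   /* SEMAS_PER_SET before and after */

    for (int i = 0; i < 2; i++)
    {
        int         sets = semmns / (per_set[i] + 1);
        int         usable = sets * per_set[i];

        printf("SEMAS_PER_SET=%d: %d sets, %d usable semaphores\n",
               per_set[i], sets, usable);
    }
    return 0;
}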
| |
Reset btpo_cycleid to 0 in btree_xlog_vacuum for consistency with
_bt_delitems_vacuum (the corresponding original execution code). This
makes things neater.
There might be some performance benefit to being consistent like this.
When btvacuumpage doesn't call _bt_delitems_vacuum, it can still
proactively reset btpo_cycleid to 0 via a separate hint-like update
mechanism (it does so whenever it sees that it isn't already set to 0).
And so it's possible that being consistent about resetting btpo_cycleid
like this will save work later on, after standby promotion: subsequent
VACUUMs won't need to clear btpo_cycleid using the hint-like update
mechanism as often as they otherwise would.
Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Andrey Borodin <x4mmm@yandex-team.ru>
Discussion: https://postgr.es/m/CAH2-Wz=+LDFxn9NZyEsCo8ifcyKt6+n-VLyygySEHgMz+oynqw@mail.gmail.com
| |
A historic snapshot should only be used for catalog access, not
general queries. We never call GetTransactionSnapshot() during logical
decoding, which is good because it wouldn't be very sensible, so the
code to deal with that was unreachable and untested. Turn it into an
error, to avoid doing that in the future either.
Discussion: https://www.postgresql.org/message-id/a868fe78-ddb4-4b0a-9b96-873d91d93cfd@iki.fi
| |
In get_database_list() and get_subscription_list(), the
GetTransactionSnapshot() call is not required because the catalog
table scans use the catalog snapshot, which is held until the end of
the scan. See table_beginscan_catalog(), which calls
RegisterSnapshot(GetCatalogSnapshot(relid)).
In InitPostgres, it's a little less obvious that it's not required,
but still true I believe. All the catalog lookups in InitPostgres()
also use the catalog snapshot, and the looked up values are copied
while still holding the snapshot.
Furthermore, as the removed FIXME comments said, calling
GetTransactionSnapshot() didn't really prevent MyProc->xmin from being
reset anyway.
Discussion: https://www.postgresql.org/message-id/7c56f180-b9e1-481e-8c1d-efa63de3ecbb@iki.fi
| |
Jian He reported the src/include/utility/tcop.h one, and the remainder
were found by using a script to look for src/* and check that we have a
filename or directory of that name.
In passing, fix some out-of-date comments.
Reported-by: Jian He <jian.universality@gmail.com>
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/CACJufxGoE3H-7VgO02=PrR4SNuVWDVbfTyUnwO0HvS-Lxurnog@mail.gmail.com
| |
Library unloading has never been supported, and its code was removed in
ab02d702ef08; however, some comments still mentioned that it was a
possible operation.
ChangAo has noticed the incorrect references in dfmgr.c, while I have
noticed the other ones while scanning the rest of the tree for similar
mistakes.
Author: ChangAo Chen, Michael Paquier
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/tencent_1D09840A1632D406A610C8C4E2491D74DB0A@qq.com
| |
GetSnapshotData() set TransactionXmin = MyProc->xmin, but when
SnapshotResetXmin() advanced MyProc->xmin, it did not advance
TransactionXmin correspondingly. That meant that TransactionXmin could
be older than MyProc->xmin, and XIDs between TransactionXmin and
the real MyProc->xmin could be vacuumed away. One known consequence is
in pg_subtrans lookups: we might try to look up the status of an XID
that was already truncated away.
Back-patch to all supported versions.
Reviewed-by: Andres Freund
Discussion: https://www.postgresql.org/message-id/d27a046d-a1e4-47d1-a95c-fbabe41debb4@iki.fi
| |
Here we convert CompactAttribute.attalign from a char, which is directly
derived from pg_attribute.attalign, into a uint8 that stores the number
of bytes by which to align the column's value in the tuple.
This allows tuple deformation and tuple size calculations to move away
from using the inefficient att_align_nominal() macro, which manually
checks each TYPALIGN_* char to translate that into the alignment bytes
for the given type. Effectively, this commit changes those to TYPEALIGN
calls, which are branchless and only perform some simple arithmetic with
some bit-twiddling.
The removed branches were often mispredicted by CPUs, especially so in
real-world tables which often contain a mishmash of different types
with different alignment requirements.
Author: David Rowley
Reviewed-by: Andres Freund, Victor Yegorov
Discussion: https://postgr.es/m/CAApHDvrBztXP3yx=NKNmo3xwFAFhEdyPnvrDg3=M0RhDs+4vYw@mail.gmail.com
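The difference can be sketched as follows. These two helpers mirror the
general shape of att_align_nominal() and the TYPEALIGN() macros rather
than copying them verbatim; the point is that the second form needs no
per-type branching.

#include <stdint.h>
#include <stdio.h>

/* Branchy: translate a type-alignment char into bytes, then align. */
static uintptr_t
align_by_char(char attalign, uintptr_t len)
{
    uintptr_t   a;

    switch (attalign)
    {
        case 'd': a = 8; break;
        case 'i': a = 4; break;
        case 's': a = 2; break;
        default:  a = 1; break;     /* 'c' */
    }
    return (len + a - 1) & ~(a - 1);
}

/* Branchless: alignment is already stored as a power-of-two byte count. */
static uintptr_t
align_by_bytes(uint8_t attalignby, uintptr_t len)
{
    return (len + attalignby - 1) & ~((uintptr_t) attalignby - 1);
}

int
main(void)
{
    printf("%lu %lu\n",
           (unsigned long) align_by_char('i', 5),      /* 8 */
           (unsigned long) align_by_bytes(4, 5));      /* 8 */
    return 0;
}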
| |
Like CurrentSnapshotData, it should not be accessed directly outside
snapmgr.c.
| |
This used to say "nsubxcnt isn't decreased when subtransactions
abort", but there's no variable called nsubxcnt. Commit 8548ddc61b
changed it to "subxcnt", among other typo fixes, but that was wrong
too: the comment actually talks about txn->nsubtxns. That's the field
that's incremented but never decremented and is used for the
allocation earlier in the function.
| |
28328ec87b45725 addressed one overflow danger in
SampleHeapTupleVisible() but introduced another, albeit a less likely
one. Modify the binary search code to remove this danger.
Reported-by: Richard Guo
Reviewed-by: Richard Guo, Ranier Vilela
Discussion: https://postgr.es/m/CAMbWs4_bE%2BNscChbKWzw6HZOipCUyXfA5133qvoXQ654D3B2gQ%40mail.gmail.com
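The commit message doesn't show the code, but the classic way to remove
this kind of overflow from a binary search is to derive the midpoint from
the distance between the bounds rather than from their sum, roughly as
below (illustrative only, not the actual SampleHeapTupleVisible() code):

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

/* Return true if 'key' is present in the sorted array 'arr' of length n. */
static bool
binsearch_contains(const uint32_t *arr, uint32_t n, uint32_t key)
{
    uint32_t    lo = 0;
    uint32_t    hi = n;         /* search the half-open interval [lo, hi) */

    while (lo < hi)
    {
        /* lo + (hi - lo) / 2 cannot overflow; (lo + hi) / 2 could. */
        uint32_t    mid = lo + (hi - lo) / 2;

        if (arr[mid] == key)
            return true;
        if (arr[mid] < key)
            lo = mid + 1;
        else
            hi = mid;
    }
    return false;
}

int
main(void)
{
    uint32_t    vals[] = {2, 4, 8, 16, 4000000000u};

    printf("%d %d\n",
           binsearch_contains(vals, 5, 8),
           binsearch_contains(vals, 5, 7));
    return 0;
}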