| Commit message | Author | Age |
... | |
| |
An upcoming patch wants to introduce an additional special case in
this function. To keep that as cheap as possible, minimize the amount
of branching that we do based on whether this is an XLOG_SWITCH
record.
Also in the interest of keeping the overhead of special-case code
paths as low as possible, apply likely() to the non-XLOG_SWITCH case,
since only a tiny fraction of WAL records will be XLOG_SWITCH records.
Patch by me, reviewed by Dilip Kumar, Amit Kapila, Andres Freund,
and Michael Paquier.
Discussion: http://postgr.es/m/CA+TgmoYy-Vc6G9QKcAKNksCa29cv__czr+N9X_QCxEfQVpp_8w@mail.gmail.com
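The branch-hint pattern described above can be sketched as below. This is a
hedged, self-contained illustration only: the struct, constants, and function
name are toy stand-ins, not the real XLogInsertRecord() code.
    #include <stdbool.h>

    #define likely(x)   __builtin_expect((x) != 0, 1)   /* GCC/Clang builtin */

    #define RM_XLOG_ID  0       /* illustrative values, not the real headers */
    #define XLOG_SWITCH 0x40

    typedef struct ToyRecordHeader { int xl_rmid; int xl_info; } ToyRecordHeader;

    static int
    insert_record(const ToyRecordHeader *rechdr)
    {
        bool    isLogSwitch = (rechdr->xl_rmid == RM_XLOG_ID &&
                               rechdr->xl_info == XLOG_SWITCH);

        if (likely(!isLogSwitch))
            return 0;           /* common case: ordinary record, cheap path */

        return 1;               /* rare case: segment-switch special handling */
    }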
| |
This could only affect HASH partitioned tables with at least 2 partition
key columns.
If partition pruning was delayed until execution, the query contained
an IS NULL qual on one of the partition keys, and some subsequent
partition key was being compared to a non-Const, then this could result
in a crash because an incorrect keyno was used to calculate the
stateidx for the expression evaluation code.
Here we fix this by properly skipping partition keys which have a
nullkey set, matching what perform_pruning_base_step() does.
Sergei Glukhov also provided a patch, but that's not what's used here.
Reported-by: Sergei Glukhov
Reviewed-by: tender wang, Sergei Glukhov
Discussion: https://postgr.es/m/d05b26fa-af54-27e1-f693-6c31590802fa@postgrespro.ru
Backpatch-through: 11, where runtime partition pruning was added.
| |
get_steps_using_prefix_recurse() incorrectly assumed that it could stop
recursive processing of the 'prefix' list when cur_keyno was one before
the step_lastkeyno. Since hash partition pruning can prune using IS
NULL quals, and these IS NULL quals are not present in the 'prefix'
list, then that logic could cause more levels of recursion than what is
needed and lead to there being no more items in the 'prefix' list to
process. This would manifest itself as a crash in some code that
expected the 'start' ListCell not to be NULL.
Here we adjust the logic so that instead of stopping recursion at 1 key
before the step_lastkeyno, we just look at the llast(prefix) item and
ensure we only recurse up until just before whichever key that is. This
effectively allows keys to be missing from the 'prefix' list.
This change does mean that step_lastkeyno is no longer needed, so we
remove that from the static functions. I also spent quite some time
reading this code and testing it to try to convince myself that there
are no other issues. That resulted in the irresistible temptation of
rewriting some comments, many of which were inaccurate or not concise.
Reported-by: Sergei Glukhov
Reviewed-by: Sergei Glukhov, tender wang
Discussion: https://postgr.es/m/2f09ce72-315e-2a33-589a-8519ada8df61@postgrespro.ru
Backpatch-through: 11, where partition pruning was introduced.
| |
This adds a new option called BGWORKER_BYPASS_ROLELOGINCHECK to the
flags available to BackgroundWorkerInitializeConnection() and
BackgroundWorkerInitializeConnectionByOid().
This allows bgworkers to bypass the role login check, making it
possible to use a role that has no login rights without being a
superuser. InitPostgres() gains a new flag called
INIT_PG_OVERRIDE_ROLE_LOGIN, taking advantage of the refactoring done in
4800a5dfb4c4.
Regression tests are added to worker_spi to check the behavior of this
new option with bgworkers.
Author: Bertrand Drouvot
Reviewed-by: Nathan Bossart, Michael Paquier, Bharath Rupireddy
Discussion: https://postgr.es/m/bcc36259-7850-4882-97ef-d6b905d2fc51@gmail.com
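A hedged sketch of how a background worker's main routine might request the
bypass; the database name, role name, and function name here are placeholders,
and only the flag spelling comes from the commit text.
    #include "postgres.h"
    #include "postmaster/bgworker.h"

    /* Sketch of a bgworker entry point using the new flag. */
    void
    my_worker_main(Datum main_arg)
    {
        BackgroundWorkerUnblockSignals();

        /* Connect as a role that has no LOGIN right, without superuser. */
        BackgroundWorkerInitializeConnection("mydb", "nologin_role",
                                             BGWORKER_BYPASS_ROLELOGINCHECK);

        /* ... do work under that role ... */
    }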
| |
Restore pgindent cleanliness, per buildfarm member koel.
| |
In commit 3fc6e2d7f, I (tgl) argued that we only need to check for
a constant-FALSE restriction clause when there's exactly one
restriction clause, on the grounds that const-folding would have
thrown away anything ANDed with a Const FALSE. That's true just after
const-folding has been applied, but subsequent processing such as
equivalence class expansion could result in cases where a Const FALSE
is ANDed with some other stuff. (Compare for instance joinrels.c's
restriction_is_constant_false.) Hence, tweak this logic to check all
the elements of the baserestrictinfo list, not just one; that's cheap
enough to not be worth worrying about.
There is one existing test case where this visibly improves the plan.
There would not be any savings in runtime, but the planner effort and
executor startup effort will be reduced, and anyway it's odd that
we can detect related cases but not this one.
Richard Guo (independently discovered by David Rowley)
Discussion: https://postgr.es/m/CAMbWs4_x3-CnVVrCboS1LkEhB5V+W7sLSCabsRiG+n7+5_kqbg@mail.gmail.com
| |
The check is not about processing the startup packet, so the calling
function seems like a more natural place. I'm also working on a patch
that moves 'canAcceptConnections' out of the Port struct, and this
makes that refactoring more convenient.
Reviewed-by: Tristan Partin
Discussion: https://www.postgresql.org/message-id/7a59b073-5b5b-151e-7ed3-8b01ff7ce9ef@iki.fi
| |
Author: Gurjeet Singh
Discussion: https://postgr.es/m/CABwTF4VFKtKrb78fBnMXwHvOu4a+-7y86siBSEety2knti2eGA@mail.gmail.com
| |
InitPostgres() has been using a set of boolean arguments to control its
behavior, and a patch under discussion was aiming at expanding it with a
third one. In preparation for expanding this area, this commit switches
all the current boolean arguments of this routine to a single bits32
argument instead. Two values are currently supported for the flags:
- INIT_PG_LOAD_SESSION_LIBS to load [session|local]_preload_libraries at
startup.
- INIT_PG_OVERRIDE_ALLOW_CONNS to allow connection to a database even if
it has !datallowconn. This is used by bgworkers.
Reviewed-by: Bertrand Drouvot
Discussion: https://postgr.es/m/ZSTn66_BXRZCeaqS@paquier.xyz
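An illustrative sketch of the flags-style interface: the bit values and the
helper shown here are assumptions for illustration, not the exact contents of
the committed header.
    #include <stdbool.h>
    #include <stdint.h>

    typedef uint32_t bits32;             /* stand-in for PostgreSQL's bits32 */

    /* Bit values are illustrative; the real definitions live elsewhere. */
    #define INIT_PG_LOAD_SESSION_LIBS    0x0001 /* load [session|local]_preload_libraries */
    #define INIT_PG_OVERRIDE_ALLOW_CONNS 0x0002 /* allow connecting despite !datallowconn */

    /* Sketch: build the flags word a caller would hand to InitPostgres(). */
    static bits32
    build_init_flags(bool load_session_libs, bool bypass_allowconn)
    {
        bits32      flags = 0;

        if (load_session_libs)
            flags |= INIT_PG_LOAD_SESSION_LIBS;
        if (bypass_allowconn)
            flags |= INIT_PG_OVERRIDE_ALLOW_CONNS;
        return flags;
    }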
| |
Mark the buffers dirty before writing WAL.
Discussion: https://postgr.es/m/25104133-7df8-cae3-b9a2-1c0aaa1c094a@iki.fi
Reviewed-by: Heikki Linnakangas
Backpatch-through: 11
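The required ordering looks roughly like the sketch below (following the
pattern documented in access/transam/README); the function, its parameters,
and the rmgr arguments are placeholders rather than the patched code.
    #include "postgres.h"
    #include "access/xloginsert.h"
    #include "miscadmin.h"
    #include "storage/bufmgr.h"
    #include "storage/bufpage.h"
    #include "utils/rel.h"

    /* Hedged sketch of the ordering only; rmid/info come from the caller's rmgr. */
    static void
    log_buffer_change(Relation rel, Buffer buf, RmgrId rmid, uint8 info)
    {
        START_CRIT_SECTION();

        /* ... modify the page in shared buffers here ... */

        /* Mark the buffer dirty *before* inserting the WAL record. */
        MarkBufferDirty(buf);

        if (RelationNeedsWAL(rel))
        {
            XLogRecPtr  recptr;

            XLogBeginInsert();
            XLogRegisterBuffer(0, buf, REGBUF_STANDARD);
            recptr = XLogInsert(rmid, info);
            PageSetLSN(BufferGetPage(buf), recptr);
        }

        END_CRIT_SECTION();
    }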
| |
Since we added the PlannerInfo.all_baserels set, it's not really
necessary to grovel over the rangetable to count baserels in the
current query. So let's drop has_multiple_baserels() in favor
of a bms_membership() test. This might be microscopically
faster, but the main point is to remove some unnecessary code.
Richard Guo
Discussion: https://postgr.es/m/CAMbWs4_8RcSbbfs1ASZLrMuL0c0EQgXWcoLTQD8swBRY_pQQiA@mail.gmail.com
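A minimal sketch of the replacement test, assuming the call site simply asks
whether all_baserels holds more than one member; the wrapper function name is
illustrative.
    #include "nodes/bitmapset.h"
    #include "nodes/pathnodes.h"

    /* Sketch of the replacement for has_multiple_baserels(). */
    static bool
    query_has_multiple_baserels(PlannerInfo *root)
    {
        return bms_membership(root->all_baserels) == BMS_MULTIPLE;
    }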
| |
The present pg_resetwal code hardcodes the minimum value for -c as 2,
which is FrozenTransactionId, but it's not clear why that is allowed.
After some research, it was probably a mistake in the original patch.
Change it to FirstNormalTransactionId, which matches other xid-related
options in pg_resetwal.
Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://www.postgresql.org/message-id/flat/d09f0e91-8757-642b-1a92-da9a52f5589a%40eisentraut.org
| |
This excludes any changes that would change the external AM APIs.
Reviewed-by: Aleksander Alekseev <aleksander@timescale.com>
Discussion: https://www.postgresql.org/message-id/flat/14c31f4a-0347-0805-dce8-93a9072c05a5%40eisentraut.org
| |
While working on a8a968a82, I failed to consider that
cheapest_startup_path can be NULL when there is no non-parameterized
path in the pathlist. This is well documented in set_cheapest(); I just
failed to notice.
Here we adjust the code to just check if the RelOptInfo has a
cheapest_startup_path set before adding it to the startup_subpaths list.
Reported-by: Richard Guo
Author: Richard Guo
Discussion: https://postgr.es/m/CAMbWs49w3t03V69XhdCuw+GDwivny4uQUxrkVp6Gejaspt0wMQ@mail.gmail.com
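A minimal sketch of the adjusted guard, assuming the surrounding code simply
collects startup paths into a list; "childrel" and "startup_subpaths" are
illustrative names, not the exact variables in allpaths.c.
    /* set_cheapest() leaves cheapest_startup_path NULL when every path in
     * the pathlist is parameterized, so only collect it when it exists. */
    if (childrel->cheapest_startup_path != NULL)
        startup_subpaths = lappend(startup_subpaths,
                                   childrel->cheapest_startup_path);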
| |
This reverts commit 608fd198def5390c3490bfe903730207dfd8eeb4.
On second thought, the StringInfo API requires that strings be NUL
terminated, and the data in a bytea Datum that we'd point to directly is
not NUL terminated.
Discussion: https://postgr.es/m/CAApHDvorfO3iBZ=xpiZvp3uHtJVLyFaPBSvcAhAq2HPLnaNSwQ@mail.gmail.com
| |
worker_spi_launch() may report that a worker stopped when it fails to
connect to a database that does not allow connections, if the worker
exits before the SQL function checks for the current status of the
worker. The test is switched to use Cluster::psql instead of
safe_psql() so that it does not fail hard when this query errors. While
at it, this removes a query that looks at pg_stat_activity to simplify
the test, as a check on the contents of the server logs achieves the
same thing when the worker cannot connect to a database without
datallowconn.
Per buildfarm members kestrel, mamba and serinus. Bonus thanks to Tom
Lane for providing the logs of the failure from mamba that the buildfarm
was not able to show. Note that I have reproduced the failure with a
hardcoded stop point.
Discussion: https://postgr.es/m/3365937.1696801735@sss.pgh.pa.us
| |
The comment claimed that it is "called from postmaster", but it is
actually called in the child process, pretty early in the process
initialization. I guess you could interpret "called from postmaster"
to mean that, but it seems wrong to me. Rename the function to be
consistent with other functions that have a similar role.
Reviewed-by: Thomas Munro
Discussion: https://www.postgresql.org/message-id/4f95c1fc-ad3c-7974-3a8c-6faa3931804c@iki.fi
| |
The child processes don't need them. By allocating them in
PostmasterContext, the memory gets freed and is made available for
other stuff in the child processes.
Reviewed-by: Thomas Munro
Discussion: https://www.postgresql.org/message-id/4f95c1fc-ad3c-7974-3a8c-6faa3931804c@iki.fi
| |
In EXEC_BACKEND or single-user mode, we process
shared_preload_libraries at postmaster startup as usual, but also at
backend startup. When a library calls RegisterBackgroundWorker() when
being loaded into a backend process, we go through the motions to add
the worker to BackgroundWorkerList, even though that is a
postmaster-private data structure. Make it return early when called in
a backend process, without changing BackgroundWorkerList.
You could argue that it was intentional: In non-EXEC_BACKEND mode, the
backend processes inherit BackgroundWorkerList at fork(), so it does
make some sense to initialize it to the same state in EXEC_BACKEND
mode, too. It's clearly a postmaster-private structure, though, and
all the functions that use it are clearly marked as "should only be
called in postmaster".
You could also argue that libraries should not call
RegisterBackgroundWorker() during backend startup. It's too late to
correctly register any static background workers at that stage. But
it's a common pattern in extensions, and it doesn't seem worth the
churn to require all extensions to change it.
Another sloppiness was the exception for "internal" background
workers. We checked that RegisterBackgroundWorker() was called during
shared_preload_libraries processing, or the background worker was an
internal one. That exception was made in commit 665d1fad99 to allow
postmaster to register the logical apply launcher in
ApplyLauncherRegister(). The way the check was written, it would not
complain if you registered an internal background worker in a regular
backend process. But it would complain if postmaster registered a
background worker defined in a shared library, outside
shared_preload_libraries processing. I think the correct rule is that
you can only register static background workers in the postmaster
process, and only before the bgworker shared memory array has been
initialized. Check for that more directly.
Reviewed-by: Thomas Munro
Discussion: https://www.postgresql.org/message-id/4f95c1fc-ad3c-7974-3a8c-6faa3931804c@iki.fi
| |
The serialized representation of an internal aggregate state is a bytea
value. In each deserial function, in order to "receive" the bytea value
we appended it onto a short-lived StringInfoData using
appendBinaryStringInfo. This was a little wasteful as it meant having to
palloc memory, copy a (possibly long) series of bytes then later pfree
that memory. Instead of going to this extra trouble, we can just fake up
a StringInfoData and point the data directly at the bytea's payload. This
should help increase the performance of internal aggregate
deserialization.
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/CAApHDvr=e-YOigriSHHm324a40HPqcUhSp6pWWgjz5WwegR=cQ@mail.gmail.com
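A hedged sketch of the "fake up a StringInfoData" idea for a deserial
function; whether these field assignments match the committed helper exactly
is an assumption, and the helper name is illustrative.
    #include "postgres.h"
    #include "lib/stringinfo.h"
    #include "varatt.h"     /* VARDATA_ANY/VARSIZE_ANY_EXHDR; in postgres.h on older branches */

    /* Sketch: point a StringInfoData directly at a bytea's payload instead of
     * copying the bytes into a palloc'd buffer; treat it as read-only. */
    static void
    init_stringinfo_from_bytea(StringInfo buf, bytea *serialized)
    {
        buf->data = VARDATA_ANY(serialized);
        buf->len = VARSIZE_ANY_EXHDR(serialized);
        buf->maxlen = buf->len;
        buf->cursor = 0;
    }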
| |
Additionally, add a missing "the" in a couple of places.
Author: Vignesh C, Dagfinn Ilmari Mannsåker
Discussion: http://postgr.es/m/CALDaNm28t+wWyPfuyqEaARS810Je=dRFkaPertaLAEJYY2cWYQ@mail.gmail.com
| |
1349d2790 added code to adjust the PlannerInfo.group_pathkeys so that
ORDER BY / DISTINCT aggregate functions could obtain pre-sorted inputs
to allow faster execution. That commit forgot to adjust the pathkeys in
create_agg_path(). Some code in there assumed that it was always fine
to make the AggPath's pathkeys the same as its subpath's. That seems to
have been ok up until 1349d2790, but since that commit adds pathkeys for
columns which are within the aggregate function, those columns won't be
available above the aggregate node. This can result in "could not find
pathkey item to sort" during create_plan().
The fix here is to strip off the additional pathkeys added by
adjust_group_pathkeys_for_groupagg(). It seems that the pathkeys here
will only ever be group_pathkeys, so all we need to do is check whether
the pathkey list is longer than num_groupby_pathkeys and, if so, get
rid of the extra ones.
Reported-by: Justin Pryzby
Reviewed-by: Richard Guo
Discussion: https://postgr.es/m/ZQhYYRhUxpW3PSf9%40telsasoft.com
Backpatch-through: 16, where 1349d2790 was introduced
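A sketch of the trimming logic described above, assuming list_copy_head() is
used to drop the pathkeys appended by adjust_group_pathkeys_for_groupagg();
"subpath" names the AggPath's input path, and the real fix may differ in detail.
    /* Keep only the first num_groupby_pathkeys entries so that no
     * aggregate-argument pathkeys leak above the Agg node. */
    List       *pathkeys = subpath->pathkeys;

    if (list_length(pathkeys) > root->num_groupby_pathkeys)
        pathkeys = list_copy_head(pathkeys, root->num_groupby_pathkeys);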
| |
Going by c4a1933b4, b33ef397a and 05893712c (to name just a few), it seems
that maintaining debug_print_rel() is often forgotten. In the case of
c4a1933b4, it was several years before anyone noticed that a path type
was not handled by debug_print_rel(). (debug_print_rel() is only
compiled when building with OPTIMIZER_DEBUG).
After a quick survey on the pgsql-hackers mailing list, nobody came
forward to admit that they use OPTIMIZER_DEBUG. So to prevent any future
maintenance neglect, let's just remove debug_print_rel() and have
OPTIMIZER_DEBUG make use of pprint() instead (as suggested by Tom Lane).
If anyone wants to come forward to claim they make use of
OPTIMIZER_DEBUG in a way that needs debug_print_rel(), then they have
around 10 months remaining in the v17 cycle in which we could revert this.
If nobody comes forward in that time, then we can likely safely declare
debug_print_rel() as not worth keeping.
Discussion: https://postgr.es/m/CAApHDvoCdjo8Cu2zEZF4-AxWG-90S+pYXAnoDDa9J3xH-OrczQ@mail.gmail.com
| |
Reported-by: Richard Guo
Discussion: https://postgr.es/m/CAMbWs4_kHMJDak75y1kBTirv-drS1-knT-7Mpg5LprAjqRJDVA%40mail.gmail.com
| |
Back in the 8.3 era we discovered that it was problematic if
libpq.so had encoding ID assignments different from the backend,
which is possible because on some platforms libpq.so might be
of a different major version from the calling programs.
psql should use libpq's assignments, but initdb has to use the
backend's, else it will put wrong values into pg_database.
The solution devised in commit 8468146b0 relied on giving initdb
its own copy of encnames.c rather than relying on the functions
exported by libpq. Later, that metamorphosed into ensuring that
libpgcommon got linked before libpq -- which made things OK for
initdb but broke psql. We didn't notice for lack of any changes
in enum pg_enc since then. Commit 06843df4a reversed that, fixing
the latent bug in psql but adding one in initdb. The meson build
infrastructure is also not being sufficiently careful about link
order, and trying to make it so would be equally fragile.
Hence, let's use a new scheme based on giving the libpq-exported
symbols different real names than the same functions exported from
libpgcommon.a or libpgcommon_srv.a. (We could distinguish those
two cases as well, but there seems no need to.) libpq gets the
official names to avoid an ABI break for libpq clients, while the
other cases use #define's to make the real names "xxx_private"
rather than "xxx". By controlling where the #define's are
applied, we can force any particular client program to use one
set or the other of the encnames.c functions.
We cannot back-patch this, since it'd be an ABI break for backend
loadable modules, but there seems little need to. We're just
trying to ensure that the world is safe for hypothetical future
additions to enum pg_enc.
In passing this should fix "duplicate symbol" linker warnings
that we've been seeing on AIX buildfarm members since commit
06843df4a. It's not very clear why that linker is complaining
now, when there were strictly *more* duplicates visible before,
but in any case this should remove the reason for complaint.
Patch by me; thanks to Andres Freund for review.
Discussion: https://postgr.es/m/2385119.1696354473@sss.pgh.pa.us
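An illustrative sketch of the renaming scheme: the guard macro and the exact
set of renamed functions here are examples, not the patch's literal contents.
    /* In a header seen only when building against libpgcommon (not libpq),
     * give the common-library copies private link names: */
    #ifndef USE_LIBPQ_ENCNAMES          /* hypothetical guard */
    #define pg_char_to_encoding         pg_char_to_encoding_private
    #define pg_encoding_to_char         pg_encoding_to_char_private
    #define pg_valid_server_encoding_id pg_valid_server_encoding_id_private
    #endif

    /* libpq keeps exporting the official names, so its ABI is unchanged;
     * programs linked with libpgcommon resolve to the *_private symbols. */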
| |
Reported-by: Alexander Lakhin
| |
There was some discussion about what the line length should be. Most
output currently clearly targets around 80 columns, but the maximum
currently in use is 95, so we set that as the current maximum. This just
ensures that there is some guidance and there are no wild deviations.
Based on a patch by Atsushi Torikoshi <torikoshia@oss.nttdata.com>
Discussion: https://www.postgresql.org/message-id/flat/50ca8ff35a8dd8f9ec89963b503571a7@oss.nttdata.com
| |
This seemed inconsistent with the --help output of other tools.
Depending on the values, it can cause ugly formatting. Also, we're
not getting the defaults from libpq; we're just emulating the methods
libpq uses to derive these values, so they might not be 100% correct.
Author: Atsushi Torikoshi <torikoshia@oss.nttdata.com>
Discussion: https://www.postgresql.org/message-id/flat/50ca8ff35a8dd8f9ec89963b503571a7@oss.nttdata.com
| |
Currently, the B-tree code matches every scan key to every item on the page.
Imagine an ordered B-tree scan for a query like this:
SELECT * FROM tbl WHERE col > 'a' AND col < 'b' ORDER BY col;
The (col > 'a') scan key will always be matched once we find the location to
start the scan. The (col < 'b') scan key will match every item on the page
as long as it matches the last item on the page.
This patch implements prechecking of the scan keys required for a directional
scan at the beginning of each page scan. If the precheck is successful, we can
skip checking those scan keys for the individual items on the page. That can
lead to significant acceleration, especially if the comparison operator is
expensive.
Idea from patch by Konstantin Knizhnik.
Discussion: https://postgr.es/m/079c3f8e-3371-abe2-e93c-fc8a0ae3f571%40garret.ru
Reviewed-by: Peter Geoghegan, Pavel Borisov
| |
When syslogger starts up, ListenSockets is still NULL. Don't try to
pfree it. Oversight in commit e29c464395.
Reported-by: Michael Paquier
Discussion: https://www.postgresql.org/message-id/ZR-uNkgL7m60lWUe@paquier.xyz
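The fix presumably amounts to a guard along these lines before freeing the
array (a sketch, not the committed hunk):
    /* Syslogger starts before the listen sockets are created, so the
     * array may legitimately still be NULL here. */
    if (ListenSockets != NULL)
        pfree(ListenSockets);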
| |
A bgworker can spawn parallel workers of its own when executing queries,
and if the worker uses BGWORKER_BYPASS_ALLOWCONN while the database it
is connected to does not allow connections, a parallel worker would fail
to start up. In the case of this module, the step checking for the
presence of the schema to create was spawning a parallel worker, which
made the last test introduced by 991bb0f9653c fail.
This issue could be reproduced with debug_parallel_query = 'regress',
for example.
Per buildfarm member crake.
| |
This bgworker flag has existed in the core code since eed1ce72e159, but
was never tested. This relies on 4f2994647ff1, which added a way to
start dynamic workers with this flag enabled.
Reviewed-by: Bertrand Drouvot, Bharath Rupireddy
Discussion: https://postgr.es/m/bcc36259-7850-4882-97ef-d6b905d2fc51@gmail.com
| |
This was previously handled by the callers but it can be moved into a
common place.
Discussion: https://www.postgresql.org/message-id/flat/52a125e4-ff9a-95f5-9f61-b87cf447e4da@eisentraut.org
| |
BuildDescForRelation()'s main job is to convert ColumnDef lists to
pg_attribute/tuple descriptor arrays, which makes it really mostly an
internal subroutine of DefineRelation() and some related functions.
That is more the remit of tablecmds.c and doesn't have much to do
with the basic tuple descriptor interfaces in tupdesc.c. This is also
supported by observing the header includes we can remove in tupdesc.c.
By moving it over, we can also (in the future) make
BuildDescForRelation() use more internals of tablecmds.c that are not
sensible to be exposed in tupdesc.c.
Discussion: https://www.postgresql.org/message-id/flat/52a125e4-ff9a-95f5-9f61-b87cf447e4da@eisentraut.org
| |
Previously, this was handled by the callers separately, but it can be
trivially moved into BuildDescForRelation() so that it is handled in a
central place.
Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://www.postgresql.org/message-id/flat/52a125e4-ff9a-95f5-9f61-b87cf447e4da@eisentraut.org
| |
Keep track of the used size of the array. That avoids looping through
the whole array in a few places. It doesn't matter from a performance
point of view since the array is small anyway, but this feels less
surprising and is a little less code. Now that we have an explicit
NumListenSockets variable that is statically initialized to 0, we
don't need the loop to initialize the array.
Allocate the array in PostmasterContext. The array isn't needed in
child processes, so this allows reusing that memory. We could easily
make the array resizable now, but we haven't heard any complaints
about the current 64 sockets limit.
Discussion: https://www.postgresql.org/message-id/7bb7ad65-a018-2419-742f-fa5fd877d338@iki.fi
| |
Previously, the JSON code didn't have to worry too much about freeing
JsonLexContext, because it was never too long-lived. With new features
being added for SQL/JSON this is no longer the case. Add a routine
that knows how to free this struct and apply that to a few places, to
prevent this from becoming problematic.
At the same time, we change the API of makeJsonLexContextCstringLen to
make it receive a pointer to JsonLexContext for callers that want it to
be stack-allocated; it can also be passed as NULL to get the original
behavior of a palloc'ed one.
This also causes an ABI break due to the addition of flags to
JsonLexContext, so we can't easily backpatch it. AFAICS that's not much
of a problem; apparently some leaks might exist in JSON usage of
text-search, for example via json_to_tsvector, but I haven't seen any
complaints about that.
Per Coverity complaint about datum_to_jsonb_internal().
Discussion: https://postgr.es/m/20230808174110.oq3iymllsv6amkih@alvherre.pgsql
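A sketch of how a caller might use the stack-allocation option together with
the new free routine. The exact post-change signature of
makeJsonLexContextCstringLen and the wrapper shown here are assumptions.
    #include "postgres.h"
    #include "common/jsonapi.h"
    #include "mb/pg_wchar.h"

    /* Sketch: stack-allocate the lexer, parse, then free only what the
     * lexer itself allocated.  Signature details are assumptions. */
    static bool
    json_is_valid(char *json, int len)
    {
        JsonLexContext lex;
        JsonSemAction nullsem = {0};    /* no callbacks: validation only */
        JsonParseErrorType result;

        makeJsonLexContextCstringLen(&lex, json, len, PG_UTF8, true);
        result = pg_parse_json(&lex, &nullsem);
        freeJsonLexContext(&lex);

        return (result == JSON_SUCCESS);
    }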
| |
6b94e7a6d did this for ordered append paths to allow fast-startup
MergeAppends; however, nothing was done for the Append case.
Here we adjust add_paths_to_append_rel() to have it build an AppendPath
containing the cheapest startup paths from each of the child relations
when the append rel has "consider_startup" set.
Author: Andy Fan, David Rowley
Discussion: https://www.postgresql.org/message-id/CAKU4AWrXSkUV=Pt-gRxQT7EbfUeNssprGyNsB=5mJibFZ6S3ww@mail.gmail.com
| |
Ensure we switch to the per-tuple memory context to prevent any memory
leaks of detoasted Datums in MemoizeHash_hash() and MemoizeHash_equal().
Reported-by: Orlov Aleksej
Author: Orlov Aleksej, David Rowley
Discussion: https://postgr.es/m/83281eed63c74e4f940317186372abfd%40cft.ru
Backpatch-through: 14, where Memoize was added
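The pattern being applied is roughly the following sketch, placed inside the
hash/equality callbacks; the access path to the ExprContext is illustrative.
    MemoryContext oldcontext;
    ExprContext *econtext = mstate->ss.ps.ps_ExprContext;  /* illustrative */

    oldcontext = MemoryContextSwitchTo(econtext->ecxt_per_tuple_memory);
    /* evaluate/compare the cache key expressions here; any detoasted Datums
     * land in the per-tuple context and go away at the next reset */
    MemoryContextSwitchTo(oldcontext);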
| |
The comment
/* On some platforms, readline is declared as readline(char *) */
is obsolete. The casting away of const can be removed.
The const in the readline() prototype was added in GNU readline 4.2,
released in 2001. BSD libedit has also had const in the prototype
since at least 2001.
(The commit that introduced this comment (187e865174) talked about
FreeBSD 4.8, which didn't have readline compatibility in libedit yet,
so it must have been talking about GNU readline in the base system.
This checks out, but already FreeBSD 5 had an updated GNU readline
with const.)
Reviewed-by: Aleksander Alekseev <aleksander@timescale.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/862fc1d4-9a0c-d2b6-5451-ee3dc750bcab%40eisentraut.org
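With the const-qualified prototype, the call needs no cast, as in this small
sketch (the wrapper function is illustrative):
    #include <stdio.h>
    #include <readline/readline.h>

    /* Modern GNU readline and libedit declare:
     *     char *readline(const char *prompt);
     * so the prompt no longer needs its const cast away. */
    static char *
    get_line(const char *prompt)
    {
        return readline(prompt);        /* previously: readline((char *) prompt) */
    }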
| |
A couple of new options are added to this module to provide more control
on the ways bgworkers are started:
- A new GUC called worker_spi.role to control which role to use by
default when starting a worker.
- worker_spi_launch() gains three arguments: a role OID, a database OID
and flags (currently only BGWORKER_BYPASS_ALLOWCONN). By default, the
role OID and the database OID are InvalidOid, in which case the worker
would use the related GUCs.
Workers loaded by shared_preload_libraries use the default values
provided by the GUCs, with flags at 0. The options are given to the
main bgworker routine through bgw_extra. A test case is tweaked to
start two dynamic workers with databases and roles defined by the caller
of worker_spi_launch().
These additions will have the advantage of expanding the tests for
bgworkers, for at least two cases:
- BGWORKER_BYPASS_ALLOWCONN has no coverage in the core tree.
- A new bgworker flag is under discussion, and this eases the
integration of new tests.
Reviewed-by: Bertrand Drouvot
Discussion: https://postgr.es/m/bcc36259-7850-4882-97ef-d6b905d2fc51@gmail.com
| |
Two custom wait events are added here:
- "TestShmMqBgWorkerStartup", when setting up a set of bgworkers in
wait_for_workers_to_become_ready().
- "TestShmMqMessageQueue", when waiting for a queued message in
test_shm_mq_pipelined().
Author: Masahiro Ikeda
Discussion: https://postgr.es/m/197bce267fa691a0ac62c86c4ab904c4@oss.nttdata.com
| |
This naming is more consistent with all the other user-facing wait event
strings. Other in-core modules will use the same naming convention, so
let's be consistent here as well.
Extracted from a larger patch by the same author.
Author: Masahiro Ikeda
Discussion: https://postgr.es/m/197bce267fa691a0ac62c86c4ab904c4@oss.nttdata.com
| |
There was only one caller left, for which this function was overkill.
Also, having it in relcache.c was inappropriate, since it doesn't work
with the relcache at all.
Discussion: https://www.postgresql.org/message-id/flat/f84640e3-00d3-5abd-3f41-e6a19d33c40b@eisentraut.org
| |
It is unnecessary to include this field in IndexInfo. It is only used
by DDL code, not during execution. It is really only used to pass
local information around between functions in index.c and indexcmds.c,
for which it is clearer to use local variables, like in similar cases.
Discussion: https://www.postgresql.org/message-id/flat/f84640e3-00d3-5abd-3f41-e6a19d33c40b@eisentraut.org
| |
In hopes of putting these where any would-be implementer is sure to
find them, make a placeholder grammar production for ALTER DROP VALUE
and put them there. This is really just a docs patch, though.
Vik Fearing, with a bit more wordsmithing by me
Discussion: https://postgr.es/m/9fffd149-da0f-0c9c-6745-731fb688642a@postgresfriends.org
| |
This further reduces the length and complexity of sendFile(),
hopefully making it easier to understand and modify. In addition
to moving some logic into a new function, I took this opportunity
to make a few slight adjustments to sendFile() itself, including
renaming the 'len' variable to 'bytes_done', since we use it to represent
the number of bytes we've handled so far, not the total
length of the file.
Patch by me, reviewed by David Steele.
Discussion: http://postgr.es/m/CA+TgmoYt5jXH4U6cu1dm9Oe2FTn1aae6hBNhZzJJjyjbE_zYig@mail.gmail.com
| |
If checksum verification fails for a particular page, we reread the
page and try one more time. The code that does this is somewhat complex
and difficult to follow. Move some of the logic into a new function
and rearrange the code a bit to try to make it clearer. This way,
we don't need the block_retry Boolean, a couple of other variables
move from sendFile() into the new function, and some code is now less
deeply indented.
Patch by me, reviewed by David Steele.
Discussion: http://postgr.es/m/CA+TgmoYt5jXH4U6cu1dm9Oe2FTn1aae6hBNhZzJJjyjbE_zYig@mail.gmail.com
| |
The code in charge of copying the contents of PgBackendStatus to local
memory could fail on memory allocation because of an overflow in the
amount of memory to allocate. The overflow can happen when combining a
high track_activity_query_size value (max of 1MB) with a large
max_connections, when their product exceeds INT32_MAX, since both
parameters are treated as signed integers. This could be triggered, for
example, by the following functions, all of which call
pgstat_read_current_status():
- pg_stat_get_backend_subxact()
- pg_stat_get_backend_idset()
- pg_stat_get_progress_info()
- pg_stat_get_activity()
- pg_stat_get_db_numbackends()
The change to use MemoryContextAllocHuge() has been introduced in
8d0ddccec636, so backpatch down to 12.
Author: Jakub Wartak
Discussion: https://postgr.es/m/CAKZiRmw8QSNVw2qNK-dznsatQqz+9DkCquxP0GHbbv1jMkGHMA@mail.gmail.com
Backpatch-through: 12
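The overflow and its fix can be sketched as follows; the variable names mirror
the description above and are stand-ins rather than the exact code.
    /* Buggy pattern: int * int can wrap past INT32_MAX before any widening:
     *     size = pgstat_track_activity_query_size * NumBackendStatSlots;
     * Fixed pattern: widen to Size first, then multiply, and use the "huge"
     * allocator since the result may exceed the normal 1GB palloc limit. */
    char       *localactivity;
    Size        size;

    size = (Size) pgstat_track_activity_query_size * NumBackendStatSlots;
    localactivity = (char *) MemoryContextAllocHuge(CurrentMemoryContext, size);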