| Commit message | Author | Age |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
OpenSSL in FIPS mode rejects several encrypted private keys used in
the test suites ssl and ssl_passphrase_callback. This is because they
are in a "traditional" OpenSSL format that uses MD5 for key
generation. The fix is to convert them to the more standard PKCS#8
format that uses SHA1 for key derivation.
This commit contains the converted keys, with the conversion done like
this:
openssl pkcs8 -topk8 -in src/test/modules/ssl_passphrase_callback/server.key -passin pass:FooBaR1 -out src/test/modules/ssl_passphrase_callback/server.key.new -passout pass:FooBaR1
mv src/test/modules/ssl_passphrase_callback/server.key.new src/test/modules/ssl_passphrase_callback/server.key
etc., as well as updated build rules to generate the keys in the new
format if they need to be regenerated.
Reviewed-by: Jacob Champion <jchampion@timescale.com>
Discussion: https://www.postgresql.org/message-id/flat/64de784b-8833-e055-3bd4-7420e6675351%40eisentraut.org
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
We now create contype='n' pg_constraint rows for not-null constraints.
We propagate these constraints to other tables during operations such as
adding inheritance relationships, creating and attaching partitions and
creating tables LIKE other tables. We also spawn not-null constraints
for inheritance child tables when their parents have primary keys.
These related constraints mostly follow the well-known rules of
conislocal and coninhcount that we have for CHECK constraints, with some
adaptations: for example, as opposed to CHECK constraints, we don't
match not-null ones by name when descending a hierarchy to alter it,
instead matching by the column name they apply to. This means we don't
require the constraint names to be identical across a hierarchy.
For now, we omit them for system catalogs. Maybe this is worth
reconsidering. We don't support NOT VALID nor DEFERRABLE clauses
either; these can be added as separate features later (this patch is
already large and complicated enough.)
psql shows these constraints in \d+.
pg_dump requires some ad-hoc hacks, particularly when dumping a primary
key. We now create one "throwaway" not-null constraint for each column
in the PK together with the CREATE TABLE command, and once the PK is
created, all those throwaway constraints are removed. This avoids
having to check each tuple for nullness when the dump restores the
primary key creation.
pg_upgrading from an older release requires a somewhat brittle procedure
to create a constraint state that matches what would be created if the
database were being created fresh in Postgres 17. I have tested all the
scenarios I could think of, and it works correctly as far as I can tell,
but I could have neglected weird cases.
This patch has been very long in the making. The first patch was
written by Bernd Helmle in 2010 to add a new pg_constraint.contype value
('n'), which I (Álvaro) then hijacked in 2011 and 2012, until that one
was killed by the realization that we ought to use contype='c' instead:
manufactured CHECK constraints. However, later SQL standard
development, as well as nonobvious emergent properties of that design
(mostly, failure to distinguish them from "normal" CHECK constraints as
well as the performance implication of having to test the CHECK
expression) led us to reconsider this choice, so now the current
implementation uses contype='n' again. During Postgres 16 this had
already been introduced by commit e056c557aef4, but there were some
problems mainly with the pg_upgrade procedure that couldn't be fixed in
reasonable time, so it was reverted.
In 2016 Vitaly Burovoy also worked on this feature[1] but found no
consensus for his proposed approach, which was claimed to be closer to
the letter of the standard, requiring an additional pg_attribute column
to track the OID of the not-null constraint for that column.
[1] https://postgr.es/m/CAKOSWNkN6HSyatuys8xZxzRCR-KL1OkHS5-b9qd9bf1Rad3PLA@mail.gmail.com
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Author: Bernd Helmle <mailings@oopsware.de>
Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>
Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com>
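A minimal SQL sketch of the propagation described above (table names are
hypothetical): a child table spawns its own not-null constraint because
its parent has a primary key, following the usual conislocal/coninhcount
rules.
CREATE TABLE parent (id int PRIMARY KEY);
CREATE TABLE child () INHERITS (parent);
SELECT conname, contype, conislocal, coninhcount
  FROM pg_constraint
 WHERE conrelid = 'child'::regclass AND contype = 'n';
The same constraint is visible in psql's \d+ child output.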
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This new view, wrapped around a SRF, shows the information known about
wait events, namely:
- Name.
- Type (Activity, I/O, Extension, etc.).
- Description.
All the information retrieved comes from wait_event_names.txt, and the
description is the same as in the documentation, with filters applied to
remove any XML markup. This view is useful when joined with
pg_stat_activity to get the description of a reported wait event.
Custom wait events for extensions are included in the view.
Original idea by Yves Colin.
Author: Bertrand Drouvot
Reviewed-by: Kyotaro Horiguchi, Masahiro Ikeda, Tom Lane, Michael
Paquier
Discussion: https://postgr.es/m/0e2ae164-dc89-03c3-cf7f-de86378053ac@gmail.com
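For instance, a join of the kind mentioned above could look like this
(a sketch, assuming the view exposes the fields listed above as the
columns type, name and description):
SELECT a.pid, a.wait_event_type, a.wait_event, w.description
  FROM pg_stat_activity a
  JOIN pg_wait_events w
    ON a.wait_event_type = w.type AND a.wait_event = w.name
 WHERE a.wait_event IS NOT NULL;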
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The OAT hooks are added in ALTER TABLE for the following subcommands:
- { ENABLE | DISABLE | [NO] FORCE } ROW LEVEL SECURITY
- { ENABLE | DISABLE } TRIGGER
- { ENABLE | DISABLE } RULE. Note that there was already a hook for
pg_rewrite, but not for the relation being altered in pg_class.
Tests are added to test_oat_hook for all the subcommand patterns gaining
hooks here. Based on a request from Legs Mansion.
Discussion: https://postgr.es/m/tencent_083B3850655AC6EE04FA0A400766D3FE8309@qq.com
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Currently, the names of custom wait events must be registered by each
backend, requiring all of them to link to an extension's shared memory
area, even if the extension is not loaded with
shared_preload_libraries.
This patch relaxes the constraints related to this infrastructure by
storing the wait events and their names in two dynamic hash tables in
shared memory. This has the advantage of simplifying the registration of
custom wait events down to a single routine call that returns an event ID
ready for consumption:
uint32 WaitEventExtensionNew(const char *wait_event_name);
The caller of this routine can then cache the returned ID locally, to be
used for pgstat_report_wait_start(), WaitLatch() or a similar routine.
The implementation uses two hash tables: one with a key based on the
event name to avoid duplicates and a second using the event ID as key
for event lookups, as in pg_stat_activity. These tables can hold a
minimum of 16 entries and a maximum of 128 entries, which should be
plenty.
The code changes done in worker_spi show how things are simplified (most
of the code removed in this commit comes from there):
- worker_spi_init() is gone.
- No more shared memory hooks required (size requested and
initialization).
- The custom wait event ID is cached in the process that needs to set
it, with one single call to WaitEventExtensionNew() to retrieve it.
Per suggestion from Andres Freund.
Author: Masahiro Ikeda, with a few tweaks from me.
Discussion: https://postgr.es/m/20230801032349.aaiuvhtrcvvcwzcx@awork3.anarazel.de
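Once an extension has called WaitEventExtensionNew() and reports the
event, it shows up like any built-in wait event; a sketch of what this
looks like from SQL (the reported event name is whatever string the
extension passed to WaitEventExtensionNew()):
SELECT pid, wait_event_type, wait_event
  FROM pg_stat_activity
 WHERE wait_event_type = 'Extension';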
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Substituting such values in extension scripts facilitated SQL injection
when @extowner@, @extschema@, or @extschema:...@ appeared inside a
quoting construct (dollar quoting, '', or ""). No bundled extension was
vulnerable. Vulnerable uses do appear in a documentation example and in
non-bundled extensions. Hence, the attack prerequisite was an
administrator having installed files of a vulnerable, trusted,
non-bundled extension. Subject to that prerequisite, this enabled an
attacker having database-level CREATE privilege to execute arbitrary
code as the bootstrap superuser. By blocking this attack in the core
server, there's no need to modify individual extensions. Back-patch to
v11 (all supported versions).
Reported by Micah Gate, Valerie Woolard, Tim Carey-Smith, and Christoph
Berg.
Security: CVE-2023-39417
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Two backend routines are added to allow extensions to allocate and define
custom wait events, all of them allocated under the type
"Extension":
* WaitEventExtensionNew(), which allocates a wait event ID computed from
a counter in shared memory.
* WaitEventExtensionRegisterName(), to associate a custom string with the
allocated wait event ID.
Note that this includes an example of how to use this new facility in
worker_spi, with TAP tests for various scenarios, and some
documentation about how to use them.
Any code in the tree that currently uses WAIT_EVENT_EXTENSION could
switch to this new facility to define custom wait events. This is left
as work for future patches.
Author: Masahiro Ikeda
Reviewed-by: Andres Freund, Michael Paquier, Tristan Partin, Bharath
Rupireddy
Discussion: https://postgr.es/m/b9f5411acda0cf15c8fbb767702ff43e@oss.nttdata.com
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The second portion of the tests had a race condition: it was possible
for the startup of the dynamic workers to fail if the static workers,
started before them via shared_preload_libraries, had not yet finished
creating their respective schemas. The conflict is caused by the fact
that the dynamic and static workers used the same IDs, overlapping each
other, so, for now, switch the dynamic workers to different IDs, leading
to different schemas being created.
Reported-by: Andres Freund
Discussion: https://postgr.es/m/20230728022332.egqzobhskmlf6ntr@awork3.anarazel.de
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This commit moves worker_spi to use TAP tests. sql/worker_spi.sql is
gone, replaced by an equivalent set of queries in a TAP script, without
worker_spi loaded in shared_preload_libraries:
- One query to launch a worker dynamically, relying now on "postgres" as
the default database to connect to.
- Two wait queries with poll_query_until(), one to wait for the worker
schema to be initialized and a second to wait for a tuple processed by
the worker.
- Server reload to accelerate the main loop of the spawned worker.
More coverage is added for workers registered when the library is loaded
with shared_preload_libraries, checking along the way that these connect
to the database set in the GUC worker_spi.database.
A local run of these tests shows that TAP is slightly faster than the
original while providing more coverage (3.7s vs 4.4s). There were also
some discussions about keeping the SQL tests, but this would require
initializing a cluster twice, increasing the runtime of the tests to
5.6s here.
These tests will be expanded more in an upcoming patch aimed at adding
support for custom wait events for the Extension class, still under
discussion, to check the new in-core APIs with and without a library set
in shared_preload_libraries.
Bharath has written the part where shared_preload_libraries is used,
while I have migrated the existing SQL tests to TAP.
Author: Bharath Rupireddy, Michael Paquier
Reviewed-by: Masahiro Ikeda
Discussion: https://postgr.es/m/CALj2ACWR9ncAiDF73unqdJF1dmsA2R0efGXX2624X+YVxcAVWg@mail.gmail.com
|
|
|
|
|
|
|
|
|
|
|
|
| |
This gives a way to distinguish, in ps output or pg_stat_activity,
between workers registered when the library is loaded with
shared_preload_libraries and workers launched dynamically.
Extracted from a larger patch by the same author.
Author: Bharath Rupireddy
Reviewed-by: Masahiro Ikeda
Discussion: https://postgr.es/m/CALj2ACWR9ncAiDF73unqdJF1dmsA2R0efGXX2624X+YVxcAVWg@mail.gmail.com
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Currently, the database name to connect to is initialized only when the
module is loaded with shared_preload_libraries, causing any call of
worker_spi_launch() to fail if the library is not loaded for a dynamic
bgworker launch. Rather than making the GUC that defines the database to
connect to PGC_POSTMASTER, this commit switches worker_spi.database to
PGC_SIGHUP, so that it is loaded even if the module's library is loaded
dynamically for a worker.
We have been discussing the integration of more advanced tests in this
module, with and without shared_preload_libraries set, so this eases a
bit the work planned in that area.
No backpatch is done as, while this is a bug, it changes the definition
of worker_spi.database.
Author: Masahiro Ikeda
Reviewed-by: Bharath Rupireddy
Discussion: https://postgr.es/m/d30d3ea7d21cb7c9e1e3cc47e301f1b6@oss.nttdata.com
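A minimal sketch of the now-working dynamic path (assuming the module is
installed as an extension and worker_spi.database is set in
postgresql.conf or left to its default):
CREATE EXTENSION worker_spi;
SELECT worker_spi_launch(1);
-- the spawned worker connects to the database in worker_spi.database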
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
As coded, the command would take the namespace OID of the first object
scanned in pg_depend when switching its namespace dependency entry to the
new one, and use it as the basis of comparison for any follow-up checks.
It would also be used as the old namespace OID to switch *from* for the
extension's pg_depend entry. Hence, if the first object scanned had a
namespace different from the one stored in the extension, we would finish
by:
- Not checking that the extension objects map to the extension's
schema.
- Not switching the extension -> namespace dependency entry to the new
namespace provided by the user, making ALTER EXTENSION ineffective.
This issue has existed since the command was introduced in d9572c4 for
relocatable extensions, so backpatch all the way down to 11. The test
case was provided by Heikki, and I have tweaked it a bit to show the
effects on pg_depend for the extension.
Reported-by: Heikki Linnakangas
Author: Michael Paquier, Heikki Linnakangas
Discussion: https://postgr.es/m/20eea594-a05b-4c31-491b-007b6fceef28@iki.fi
Backpatch-through: 11
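A sketch of how the fix can be verified from SQL (extension and schema
names are hypothetical; any relocatable extension works):
ALTER EXTENSION cube SET SCHEMA new_schema;
SELECT refobjid::regnamespace AS extension_schema
  FROM pg_depend
 WHERE classid = 'pg_extension'::regclass
   AND objid = (SELECT oid FROM pg_extension WHERE extname = 'cube')
   AND refclassid = 'pg_namespace'::regclass;
-- expected to report new_schema once the dependency entry is switched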
|
|
|
|
|
|
|
|
| |
The Makefile does not specify ENCODING, so meson.build should not,
either. Oversight in commit 877bf52cff.
Discussion: https://postgr.es/m/ZKYpvvNQdbQuRDGx@paquier.xyz
Reported-by: Michael Paquier
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The following routines are added to libpq:
PGresult *PQclosePrepared(PGconn *conn, const char *stmt);
PGresult *PQclosePortal(PGconn *conn, const char *portal);
int PQsendClosePrepared(PGconn *conn, const char *stmt);
int PQsendClosePortal(PGconn *conn, const char *portal);
The "send" routines are non-blocking versions of the two others.
Close messages are part of the protocol but did not have a libpq
implementation. Having these routines is useful, for instance, with
connection poolers, as these can detect Close messages more easily
than DEALLOCATE queries.
The implementation takes advantage of what the Describe routines rely on
for portals and statements. Some regression tests are added in
libpq_pipeline for the four new routines, closing portals and
statements already created by the tests.
Author: Jelte Fennema
Reviewed-by: Jian He, Michael Paquier
Discussion: https://postgr.es/m/CAGECzQTb4xFAopAVokudB+L62Kt44mNAL4Z9zZ7UTrs1TRFvWA@mail.gmail.com
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The following changes are done:
- Addition of WaitEventBufferPin and WaitEventExtension, which hold a
list of wait events related to each category.
- Addition of two functions that encapsulate the list of wait events for
each category.
- Rename BUFFER_PIN to BUFFERPIN (only this wait event class used an
underscore, requiring a specific rule in the automation script).
These changes make the automatic generation of all the code and
documentation related to wait events a bit easier, as all the wait event
categories are now controlled by consistent structures and functions.
Author: Bertrand Drouvot
Discussion: https://postgr.es/m/c6f35117-4b20-4c78-1df5-d3056010dcf5@gmail.com
Discussion: https://postgr.es/m/77a86b3a-c4a8-5f5d-69b9-d70bbf2e9b98@gmail.com
|
|
|
|
|
|
|
|
|
|
| |
One of the tests for pipeline mode with portal description expects a
non-NULL PQgetResult, but used an incorrect error message on failure,
saying that a NULL PQgetResult was the expected result.
Author: Jelte Fennema
Discussion: https://postgr.es/m/CAGECzQTkShHecFF+EZrm94Lbsu2ej569T=bz+PjMbw9Aiioxuw@mail.gmail.com
Backpatch-through: 14
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Specify --no-locale and --encoding=UTF8 to be consistent with the
Makefile, which specifies NO_LOCALE=1. This fixes the test for some
locales when meson is used and ICU is disabled. May have been an
oversight in
e6927270cd.
Also switch argument order in unaccent/meson.build to make it
consistent in style.
Discussion: https://postgr.es/m/CABwTF4Wz41pNMJ9q3tpH=6mnvg6aopDU5Lzvers5=6=WJVekww@mail.gmail.com
Author: Gurjeet Singh
Author: Jeff Davis
|
|
|
|
| |
This reverts commit 05e17373517114167d002494e004fa0aa32d1fd1.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
While executing maintenance operations (ANALYZE, CLUSTER, REFRESH
MATERIALIZED VIEW, REINDEX, or VACUUM), set search_path to
'pg_catalog, pg_temp' to prevent inconsistent behavior.
Functions that are used for functional indexes, in index expressions,
or in materialized views and depend on a different search path must be
declared with CREATE FUNCTION ... SET search_path='...'.
This change addresses a security risk introduced in commit 60684dd834,
where a role with MAINTAIN privileges on a table may be able to
escalate privileges to the table owner. That commit is not yet part of
any release, so no need to backpatch.
Discussion: https://postgr.es/m/e44327179e5c9015c8dda67351c04da552066017.camel%40j-davis.com
Reviewed-by: Greg Stark
Reviewed-by: Nathan Bossart
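For example, a function used in an expression index that depends on a
particular schema can pin it explicitly (a sketch; the schema, function
and table names are hypothetical):
CREATE FUNCTION mask(text) RETURNS text
    LANGUAGE sql IMMUTABLE
    SET search_path = myschema, pg_temp
    AS $$ SELECT normalize_value($1) $$;
CREATE INDEX ON mytable (mask(col));
-- REINDEX and other maintenance commands now run with search_path set
-- to 'pg_catalog, pg_temp', so the function must not rely on the
-- caller's search_path.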
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Run pgindent, pgperltidy, and reformat-dat-files.
This set of diffs is a bit larger than typical. We've updated to
pg_bsd_indent 2.1.2, which properly indents variable declarations that
have multi-line initialization expressions (the continuation lines are
now indented one tab stop). We've also updated to perltidy version
20230309 and changed some of its settings, which reduces its desire to
add whitespace to lines to make assignments etc. line up. Going
forward, that should make for fewer random-seeming changes to existing
code.
Discussion: https://postgr.es/m/20230428092545.qfb3y5wcu4cm75ur@alvherre.pgsql
|
|
|
|
|
|
|
|
|
|
| |
Commit 3581cbdcd6 added a flag to identify empty BRIN ranges. This adds
the new flag to brin_page_items() output.
This is kept as a separate commit as it should not be backpatched.
Reviewed-by: Justin Pryzby, Matthias van de Meent, Alvaro Herrera
Discussion: https://postgr.es/m/402430e4-7d9d-6cf1-09ef-464d80afff3b@enterprisedb.com
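A sketch of how the flag surfaces through pageinspect (index name and
block number are hypothetical; the new column appears alongside the
existing allnulls/hasnulls/placeholder flags):
SELECT * FROM brin_page_items(get_raw_page('brin_idx', 2), 'brin_idx');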
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
BRIN indexes did not properly distinguish between summaries for empty
(no rows) and all-NULL ranges, treating them as essentially the same
thing. Summaries were initialized with allnulls=true, and opclasses
simply reset allnulls to false when processing the first non-NULL value.
This however produces incorrect results if the range starts with a NULL
value (or a sequence of NULL values), in which case we forget the range
contains NULL values when adding the first non-NULL value.
This happens because the allnulls flag is used for two separate
purposes - to mark empty ranges (not representing any rows yet) and
ranges containing only NULL values.
Opclasses don't know which of these cases it is, and so don't know
whether to set hasnulls=true. Setting the flag in both cases would make
it correct, but it would also make BRIN indexes useless for queries with
IS NULL clauses. All ranges start empty (and thus allnulls=true), so all
ranges would end up with either allnulls=true or hasnulls=true.
The severity of the issue is somewhat reduced by the fact that it only
happens when adding values to an existing summary with allnulls=true.
This can happen e.g. for small tables (because a summary for the first
range exists for all BRIN indexes), or for tables with large fraction of
NULL values in the indexed columns.
Bulk summarization (e.g. during CREATE INDEX or automatic summarization)
that processes all values at once is not affected by this issue. In this
case the flags were updated in a slightly different way, not forgetting
the NULL values.
To identify empty ranges we use a new flag, stored in an unused bit in
the BRIN tuple header so the on-disk format remains the same. A matching
flag is added to BrinMemTuple, into a 3B gap after bt_placeholder.
That means there's no risk of ABI breakage, although we don't actually
pass the BrinMemTuple to any public API.
We could also skip storing index tuples for empty summaries, but then
we'd have to always process such ranges - even if there are no rows in
large parts of the table (e.g. after a bulk DELETE), it would still
require reading the pages etc. So we store them, but ignore them when
building the bitmap.
Backpatch to 11. The issue exists since BRIN indexes were introduced in
9.5, but older releases are already EOL.
Backpatch-through: 11
Reviewed-by: Justin Pryzby, Matthias van de Meent, Alvaro Herrera
Discussion: https://postgr.es/m/402430e4-7d9d-6cf1-09ef-464d80afff3b@enterprisedb.com
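A minimal sketch of the problematic sequence described above, where an
existing allnulls summary receives its first non-NULL value:
CREATE TABLE t (a int);
INSERT INTO t VALUES (NULL);              -- range contains only NULLs
CREATE INDEX t_brin ON t USING brin (a);  -- summary built with allnulls
INSERT INTO t VALUES (1);                 -- previously reset allnulls and
                                          -- forgot the existing NULL
SELECT count(*) FROM t WHERE a IS NULL;   -- a bitmap scan on the BRIN
                                          -- index must still find the row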
|
|
|
|
|
|
|
|
|
| |
This reverts commit 096dd80f3ccc and its fixups beecbe8e5001, afdd9f7f0e00,
529da086ba, db93e739ac61.
Catversion is bumped.
Discussion: https://postgr.es/m/d46f9265-ff3c-6743-2278-6772598233c2%40pgmasters.net
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Give the new GUC introduced by d4e71df6 a name that is clearly not
intended for mainstream use quite yet.
Future proposals would drop the prefix only after adding infrastructure
to make it efficient. Having the switch in the tree sooner is good
because it might lead to new discoveries about the hazards awaiting us
on a wide range of systems, but that name was too enticing and could
lead to cross-version confusion in future, per complaints from Noah and
Justin.
Suggested-by: Noah Misch <noah@leadboat.com>
Reviewed-by: Noah Misch <noah@leadboat.com>
Reviewed-by: Justin Pryzby <pryzby@telsasoft.com> (the idea, not the patch)
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> (ditto)
Discussion: https://postgr.es/m/20230430041106.GA2268796%40rfd.leadboat.com
|
|
|
|
|
| |
Author: Alexander Lakhin
Discussion: https://postgr.es/m/d5f68d19-c0fc-91a9-118d-7c6a5a3f5fad@gmail.com
|
|
|
|
|
|
| |
Author: Justin Pryzby
Reviewed-by: David Rowley
Discussion: https://postgr.es/m/ZD3D1QxoccnN8A1V@telsasoft.com
|
|
|
|
|
|
|
|
|
|
|
|
| |
This fixes many spelling mistakes in comments, as well as a few
references to incorrect parameter names, function names and option names
in comments, and also some in string constants.
Also, fix an #undef that was undefining the wrong definition.
Author: Alexander Lakhin
Reviewed-by: Justin Pryzby
Discussion: https://postgr.es/m/d5f68d19-c0fc-91a9-118d-7c6a5a3f5fad@gmail.com
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Make sure that function declarations use names that exactly match the
corresponding names from function definitions in a few places. These
inconsistencies were all introduced relatively recently, after the code
base had parameter name mismatches fixed in bulk (see commits starting
with commits 4274dc22 and 035ce1fe).
pg_bsd_indent still has a couple of similar inconsistencies, which I
(pgeoghegan) have left untouched for now.
Like all earlier commits that cleaned up function parameter names, this
commit was written with help from clang-tidy.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The test previously had a list of OSes that direct I/O was expected to
work on. That worked well enough for the systems in our build farm, but
didn't survive contact with the Debian build bots running on tmpfs via
overlayfs. tmpfs does not support O_DIRECT, but we don't want to
exclude Linux generally.
The new approach is to try to create an empty file with O_DIRECT from
Perl first. If that fails, we'll skip the test and report what the
error was.
Reported-by: Christoph Berg <myon@debian.org>
Reviewed-by: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Reviewed-by: Andrew Dunstan <andrew@dunslane.net>
Discussion: https://postgr.es/m/ZDYd4A78cT2ULxZZ%40msg.df7cb.de
|
|
|
|
|
|
|
|
|
|
|
| |
This reverts commit e056c557aef4 and minor later fixes thereof.
There are a few problems in this new feature -- most notably regarding
pg_upgrade behavior, but others as well. This new feature is not in any
way critical on its own, so instead of scrambling to fix it we revert it
and try again in early 17 with these issues in mind.
Discussion: https://postgr.es/m/3801207.1681057430@sss.pgh.pa.us
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The new direct I/O test deliberately uses a very small shared_buffers to
force some disk transfers without making the data set large and slow,
but ran into a problem with wal_level = minimal: log_newpage_range()
pins many buffers, leading to a few intermittent "no unpinned buffers
available" errors.
We could presumably fix that by adjusting shared_buffers, but crake
seems to be trying to tell us something interesting with these settings,
so let's just avoid wal_level = minimal in this test for now.
Reported-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/20230408060408.n7xdwk3mxj5oykt6%40awork3.anarazel.de
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Previously, a PostgreSQL-specific callback checked by the regex engine
had a way to trigger a special error code REG_CANCEL if it detected that
the next call to CHECK_FOR_INTERRUPTS() would certainly throw via
ereport().
A later proposed bugfix aims to move some complex logic out of signal
handlers, so that it won't run until the next CHECK_FOR_INTERRUPTS(),
which makes the above design impossible unless we split
CHECK_FOR_INTERRUPTS() into two phases, one to run logic and another to
ereport(). We may develop such a system in the future, but for the
regex code it is no longer necessary.
An earlier commit moved regex memory management over to our
MemoryContext system. Given that the purpose of the two-phase interrupt
checking was to free memory before throwing, something we don't need to
worry about anymore, it seems simpler to inject CHECK_FOR_INTERRUPTS()
directly into cancelation points, and just let it throw.
Since the plan is to keep PostgreSQL-specific concerns separate from the
main regex engine code (with a view to being able to stay in sync with
other projects), do this with a new macro INTERRUPT(), customizable in
regcustom.h and defaulting to nothing.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CA%2BhUKGK3PGKwcKqzoosamn36YW-fsuTdOPPF1i_rtEO%3DnEYKSg%40mail.gmail.com
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Provide a way to ask the kernel to use O_DIRECT (or local equivalent)
where available for data and WAL files, to avoid or minimize kernel
caching. This hurts performance currently and is not intended for end
users yet. Later proposed work would introduce our own I/O clustering,
read-ahead, etc to replace the facilities the kernel disables with this
option.
The only user-visible change, if the developer-only GUC is not used, is
that this commit also removes the obscure logic that would activate
O_DIRECT for the WAL when wal_sync_method=open_[data]sync and
wal_level=minimal (which also requires max_wal_senders=0). Those are
non-default and unlikely settings, and this behavior wasn't (correctly)
documented. The same effect can be achieved with io_direct=wal.
Author: Thomas Munro <thomas.munro@gmail.com>
Author: Andres Freund <andres@anarazel.de>
Author: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>
Reviewed-by: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Discussion: https://postgr.es/m/CA%2BhUKGK1X532hYqJ_MzFWt0n1zt8trz980D79WbjwnT-yYLZpg%40mail.gmail.com
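For developers wanting to experiment, the setting can be applied like
this (a sketch; the parameter takes a list of "data", "wal" and
"wal_init", requires a server restart, and needs a platform and
filesystem supporting O_DIRECT):
ALTER SYSTEM SET io_direct = 'data, wal';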
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
We now create pg_constraint rows for NOT NULL constraints with
contype='n'.
We propagate these constraints during operations such as adding
inheritance relationships, creating and attaching partitions, creating
tables LIKE other tables. We mostly follow the well-known rules of
conislocal and coninhcount that we have for CHECK constraints, with some
adaptations; for example, as opposed to CHECK constraints, we don't
match NOT NULL ones by name when descending a hierarchy to alter it;
instead we match by column number. This means we don't require the
constraint names to be identical across a hierarchy.
For now, we omit them from system catalogs. Maybe this is worth
reconsidering. We don't support NOT VALID nor DEFERRABLE clauses
either; these can be added as separate features later (this patch is
already large and complicated enough.)
This has been very long in the making. The first patch was written by
Bernd Helmle in 2010 to add a new pg_constraint.contype value ('n'),
which I (Álvaro) then hijacked in 2011 and 2012, until that one was
killed by the realization that we ought to use contype='c' instead:
manufactured CHECK constraints. However, later SQL standard
development, as well as nonobvious emergent properties of that design
(mostly, failure to distinguish them from "normal" CHECK constraints as
well as the performance implication of having to test the CHECK
expression) led us to reconsider this choice, so now the current
implementation uses contype='n' again.
In 2016 Vitaly Burovoy also worked on this feature[1] but found no
consensus for his proposed approach, which was claimed to be closer to
the letter of the standard, requiring additional pg_attribute columns to
track the OID of the NOT NULL constraint for that column.
[1] https://postgr.es/m/CAKOSWNkN6HSyatuys8xZxzRCR-KL1OkHS5-b9qd9bf1Rad3PLA@mail.gmail.com
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Author: Bernd Helmle <mailings@oopsware.de>
Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>
Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Discussion: https://postgr.es/m/CACA0E642A0267EDA387AF2B%40%5B172.26.14.62%5D
Discussion: https://postgr.es/m/AANLkTinLXMOEMz+0J29tf1POokKi4XDkWJ6-DDR9BKgU@mail.gmail.com
Discussion: https://postgr.es/m/20110707213401.GA27098@alvh.no-ip.org
Discussion: https://postgr.es/m/1343682669-sup-2532@alvh.no-ip.org
Discussion: https://postgr.es/m/CAKOSWNkN6HSyatuys8xZxzRCR-KL1OkHS5-b9qd9bf1Rad3PLA@mail.gmail.com
Discussion: https://postgr.es/m/20220817181249.q7qvj3okywctra3c@alvherre.pgsql
|
|
|
|
|
|
|
|
| |
PL/pgSQL was misspelled in a few places, so fix these.
Author: Zhang Mingli
Reviewed-by: Richard Guo
Discussion: https://postgr.es/m/1bd41572-9cd9-465e-9f59-ee45385e51b4@Spark
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
@extschema:name@ extends the existing @extschema@ feature so that
we can also insert the schema name of some required extension,
thus making cross-extension references robust even if they are in
different schemas.
However, this has the same hazard as @extschema@: if the schema
name is embedded literally in an installed object, rather than being
looked up once during extension script execution, then it's no longer
safe to relocate the other extension to another schema. To deal with
that without restricting things unnecessarily, add a "no_relocate"
option to extension control files. This allows an extension to
specify that it cannot handle relocation of some of its required
extensions, even if in themselves those extensions are relocatable.
We detect "no_relocate" requests of dependent extensions during
ALTER EXTENSION SET SCHEMA.
Regina Obe, reviewed by Sandro Santilli and myself
Discussion: https://postgr.es/m/003001d8f4ae$402282c0$c0678840$@pcorp.us
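A sketch of the intended usage in an extension script, here using hstore
as the required extension (the referencing extension's control file would
list it in "requires", and can add it to "no_relocate" if the schema name
ends up baked into installed objects):
CREATE CAST (text[] AS hstore)
    WITH FUNCTION @extschema:hstore@.hstore(text[]);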
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
When determining whether an index update may be skipped by using HOT, we
can ignore attributes indexed by block summarizing indexes, which hold no
references to individual tuples that would need to be cleaned up.
A new type TU_UpdateIndexes provides a signal to the executor to
determine which indexes to update - no indexes, all indexes, or only the
summarizing indexes.
This also removes the rd_indexattr list and replaces it with an
rd_attrsvalid flag. The list was not used anywhere, and a simple flag is
sufficient.
This was originally committed as 5753d4ee32, but then got reverted by
e3fcca0d0d because of correctness issues.
Original patch by Josef Simanek, various fixes and improvements by Tomas
Vondra and me.
Authors: Matthias van de Meent, Josef Simanek, Tomas Vondra
Reviewed-by: Tomas Vondra, Alvaro Herrera
Discussion: https://postgr.es/m/05ebcb44-f383-86e3-4f31-0a97a55634cf@enterprisedb.com
Discussion: https://postgr.es/m/CAFp7QwpMRGcDAQumN7onN9HjrJ3u4X3ZRXdGFT0K5G2JWvnbWg%40mail.gmail.com
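A sketch of the user-visible effect (names are hypothetical): an UPDATE
that only changes a column indexed by BRIN can now remain HOT, which
should be reflected in pg_stat_user_tables.n_tup_hot_upd, assuming the
page has enough free space.
CREATE TABLE events (id int PRIMARY KEY, created timestamptz);
CREATE INDEX events_created_brin ON events USING brin (created);
UPDATE events SET created = now() WHERE id = 1;
SELECT n_tup_upd, n_tup_hot_upd
  FROM pg_stat_user_tables WHERE relname = 'events';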
|
|
|
|
|
|
|
|
|
| |
Mainly move some detail from errmsg to errdetail, remove explicit
mention of superuser where appropriate, since that is implied in most
permission checks, and make messages more uniform.
Author: Nathan Bossart <nathandbossart@gmail.com>
Discussion: https://www.postgresql.org/message-id/20230316234701.GA903298@nathanxps13
|
|
|
|
| |
This bit was forgotten in 419a8dd.
|
|
|
|
|
|
|
|
|
|
|
| |
The hook can be installed by a shared preload library.
A similar mechanism could be used for RADIUS passwords, for example, and
the type name auth_password_hook_typ has been chosen with that in mind.
John Naylor and Andrew Dunstan
Discussion: https://postgr.es/m/469b06ed-69de-ba59-c13a-91d2372e52a9@dunslane.net
|
|
|
|
|
|
|
|
|
|
|
| |
This was an omission in the original creation of the module.
Also slightly adjust some wording to avoid a double "is".
Backpatch the non-meson piece of this to release 12, where the module
was introduced.
Discussion: https://postgr.es/m/be869e1c-8e3f-4cde-8609-212c899cccf9@dunslane.net
|
|
|
|
|
|
|
|
|
|
| |
Using `require` / `->import` instead of `use` avoids the use of a
"stringy eval", making for cleaner code that we don't need to silence
perlcritic about.
Per Andrew Dunstan
Discussion: https://postgr.es/m/7cd3bbbd-0216-4436-d571-8f80c9259a07@dunslane.net
|
|
|
|
|
|
|
|
| |
When one of these tests fails to match the trace, this better shows what
the problem is.
Discussion: https://postgr.es/m/20220617183150.ilgokxp22mzywnhh@alvherre.pgsql
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Previously, meson installed modules under src/test/modules/ as part of
a normal installation, even though these files are only meant for use
by tests. This is because there is no way to set up the build
system to install extra things only when told to.
This patch fixes that with a workaround: We don't install these
modules as part of meson install, but we create a new "test" that runs
before the real tests, whose action is to install these files. The
installation is done by manual copies using a small helper script.
Author: Nazir Bilal Yavuz <byavuz81@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/2a039e8e-f31f-31e8-afe7-bab3130ad2de%40enterprisedb.com
|
|
|
|
|
|
|
|
|
| |
Commit 0a20ff54f split out the GUC variables from guc.c into a new file
guc_tables.c. This updates comments referencing guc.c regarding variables
which are now in guc_tables.c.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/6B50C70C-8C1F-4F9A-A7C0-EEAFCC032406@yesql.se
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
force_parallel_mode is meant to be used to allow us to exercise the
parallel query infrastructure to ensure that it's working as we expect.
It seems some users think this GUC is for forcing the query planner into
picking a parallel plan regardless of the costs. A quick look at the
documentation would have made them realize that they were wrong, but the
GUC is likely too conveniently named which, evidently, seems to often
result in users expecting that it forces the planner into usefully
parallelizing queries.
Here we rename the GUC to something which casual users are less likely to
mistakenly think is what they need to make their query run more quickly.
For now, the old name can still be used. We'll revisit if the old name
mapping can be removed once the buildfarm configs are all updated.
Reviewed-by: John Naylor
Discussion: https://postgr.es/m/CAApHDvrsOi92_uA7PEaHZMH-S4Xv+MGhQWA+GrP8b1kjpS1HjQ@mail.gmail.com
|
|
|
|
|
|
|
| |
This simplifies a few lines of Perl test code a bit.
Author: Bertrand Drouvot
Discussion: https://postgr.es/m/846724b5-0723-f4c2-8b13-75301ec7509e@gmail.com
|
|
|
|
|
|
|
|
| |
It's likely worth adding some automated way of preventing further
omissions. We're discussing how to best do that.
Reported-by: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/20230117173509.GV9837@telsasoft.com
|