| Commit message | Author | Age |
...
| |
Presently, if an archive module's check_configured_cb callback
returns false, a generic WARNING message is emitted, which
unfortunately provides no actionable details about the reason why
the module is not configured. This commit introduces a macro that
archive module authors can use to add a DETAIL line to this WARNING
message.
Co-authored-by: Tung Nguyen
Reviewed-by: Daniel Gustafsson, Álvaro Herrera
Discussion: https://postgr.es/m/4109578306242a7cd5661171647e11b2%40oss.nttdata.com
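For illustration only, a check_configured_cb might use such a macro roughly
as sketched below; the macro name arch_module_check_errdetail() and the GUC
name are assumptions for this sketch, not text taken from the commit.

    static bool
    basic_archive_configured(ArchiveModuleState *state)
    {
        if (archive_directory == NULL || archive_directory[0] == '\0')
        {
            /* assumed macro: attaches a DETAIL line to the generic WARNING */
            arch_module_check_errdetail("%s is not set.",
                                        "basic_archive.archive_directory");
            return false;
        }
        return true;
    }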
| |
Auto-generated array types, multirange types, and relation rowtypes
are treated as dependent objects: they can't be dropped separately
from the base object, nor can they have their own ownership or
permissions. We previously felt that, for objects that are in an
extension, only the base object needs to be listed as an extension
member in pg_depend. While that's sufficient to prevent inappropriate
drops, it results in undesirable answers if someone asks whether a
dependent type belongs to the extension. It looks like the dependent
type is just some random separately-created object that happens to
depend on the base object. Notably, this results in postgres_fdw
concluding that expressions involving an array type are not shippable
to the remote server, even when the defining extension has been
whitelisted.
To fix, cause GenerateTypeDependencies to make extension dependencies
for dependent types as well as their base objects, and adjust
ExecAlterExtensionContentsStmt so that object addition and removal
operations recurse to dependent types. The latter change means that
pg_upgrade of a type-defining extension will end with the dependent
type(s) now also listed as extension members, even if they were
not that way in the source database. Normally we want pg_upgrade
to precisely reproduce the source extension's state, but it seems
desirable to make an exception here.
This is arguably a bug fix, but we can't back-patch it since it
causes changes in the expected contents of pg_depend. (Because
it does, I've bumped catversion, even though there's no change
in the immediate post-initdb catalog contents.)
Tom Lane and David Geier
Discussion: https://postgr.es/m/4a847c55-489f-4e8d-a664-fc6b1cbe306f@gmail.com
| |
Apparently, I neglected to pgindent the prior commit.
Per buildfarm.
| |
After XLOG_DBASE_CREATE_FILE_COPY, a correct incremental backup needs
to copy in full everything belonging to the database and tablespace OIDs
mentioned in that record; but that record doesn't specifically mention
the blocks, or even the relfilenumbers, of the affected relations.
As a result, we were failing to copy data that we should have copied.
To fix, enter the DB OID and tablespace OID into the block reference
table with relfilenumber 0 and limit block 0; and treat that as a
limit block of 0 for every relfilenumber whose DB OID and tablespace
OID match.
Also, add a test case.
Patch by me, reviewed by Noah Misch.
Discussion: http://postgr.es/m/CA+Tgmob0xa=ByvGLMdAgkUZyVQE=r4nyYZ_VEa40FCfEDFnTKA@mail.gmail.com
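A rough sketch of the consumer-side rule described above; the lookup helper
and its signature are hypothetical, not the real blkreftable.c API.

    /*
     * A limit block recorded under relfilenumber 0 acts as a wildcard for
     * every relation sharing that database OID and tablespace OID.
     */
    static BlockNumber
    effective_limit_block(BlockRefTable *brtab, RelFileLocator rlocator)
    {
        RelFileLocator wildcard = rlocator;

        wildcard.relNumber = 0;     /* entry made for XLOG_DBASE_CREATE_FILE_COPY */
        if (lookup_limit_block(brtab, &wildcard) == 0)  /* hypothetical helper */
            return 0;               /* whole database copied: back up everything */
        return lookup_limit_block(brtab, &rlocator);
    }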
| |
After commit 53c2a97a9266, the code flow around the "retry" goto label
in GetMultiXactIdMembers was confused about what was possible: we never
return there with a held lock, so there's no point in testing for one.
This realization lets us simplify the code a bit. While at it, make the
scope of a couple of local variables in the same function a bit tighter.
Per Coverity.
| |
New code in 53c2a97a9266 uses direct array access to
shared->bank_locks[bankno].lock, which can be made a little bit more
legible by using the SimpleLruGetBankLock() helper function.
Nothing terribly serious, but let's add some clarity.
Discussion: https://postgr.es/m/202403041517.3a35jw53os65@alvherre.pgsql
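The shape of the change is roughly the following; this is an illustrative
sketch that assumes the helper takes the SLRU control struct and a page
number.

    /* before: open-coded bank lookup */
    lock = &shared->bank_locks[bankno].lock;

    /* after: let the helper derive the bank from the page number */
    lock = SimpleLruGetBankLock(ctl, pageno);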
| |
Fix for dbbca2cf29: "storage/shmem.h" is required with
-Dspinlocks=false.
| |
The adminpack extension was only used to support pgAdmin III, which
in turn was declared EOL many years ago. Removing the extension also
allows us to remove functions from core that were only used to
support old versions of adminpack.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Reviewed-by: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Discussion: https://postgr.es/m/CALj2ACUmL5TraYBUBqDZBi1C+Re8_=SekqGYqYprj_W8wygQ8w@mail.gmail.com
| |
as determined by include-what-you-use (IWYU)
While IWYU also suggests *adding* a bunch of #include's (which is its
main purpose), this patch does not do that. In some cases, a more
specific #include replaces another less specific one.
Some manual adjustments of the automatic result:
- IWYU currently doesn't know about includes that provide global
variable declarations (like -Wmissing-variable-declarations), so
those includes are being kept manually.
- All includes for port(ability) headers are being kept for now, to
play it safe.
- No changes of catalog/pg_foo.h to catalog/pg_foo_d.h, to keep the
patch from exploding in size.
Note that this patch touches just *.c files, so nothing declared in
header files changes in hidden ways.
As a small example, in src/backend/access/transam/rmgr.c, some IWYU
pragma annotations are added to handle a special case there.
Discussion: https://www.postgresql.org/message-id/flat/af837490-6b2f-46df-ba05-37ea6a6653fc%40eisentraut.org
| |
It has been unused since we removed support for protocol version 2.
| |
The pid was originally used in the error context of messages propagated
from parallel workers, but commit 292794f82b removed that. If the need
arises in the future, you can also get the pid with
"shm_mq_get_sender(pcxt->worker[i].error_mqh)->pid".
| |
Remove IsBackgroundWorker, IsAutoVacuumLauncherProcess(),
IsAutoVacuumWorkerProcess(), and IsLogicalSlotSyncWorker() in favor of
new Am*Process() macros that use MyBackendType, for consistency with
the existing Am*Process() macros.
Reviewed-by: Andres Freund
Discussion: https://www.postgresql.org/message-id/f3ecd4cb-85ee-4e54-8278-5fabfb3a4ed0@iki.fi
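For illustration, such macros can be defined as simple tests on
MyBackendType, along these lines; the exact BackendType enum spellings are
assumptions of this sketch.

    #define AmAutoVacuumLauncherProcess()    (MyBackendType == B_AUTOVAC_LAUNCHER)
    #define AmAutoVacuumWorkerProcess()      (MyBackendType == B_AUTOVAC_WORKER)
    #define AmBackgroundWorkerProcess()      (MyBackendType == B_BG_WORKER)
    #define AmLogicalSlotSyncWorkerProcess() (MyBackendType == B_SLOTSYNC_WORKER)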
| |
MyAuxProcType was redundant with MyBackendType.
Reviewed-by: Reid Thompson, Andres Freund
Discussion: https://www.postgresql.org/message-id/f3ecd4cb-85ee-4e54-8278-5fabfb3a4ed0@iki.fi
| |
In a similar effort to 413c18401, separate out the hot and cold paths in
GenerationAlloc() and SlabAlloc() to avoid having to set up the stack
frame for the hot path.
This additionally adjusts how we use the GenerationContext's freeblock.
The freeblock, when set, is now always empty, and we only switch to using
it when the current allocation request finds the current block does not
have enough space and the freeblock is large enough to accommodate the
allocation.
This commit also adjusts GenerationFree() so that if we pfree the final
allocation in the current generation block, we now mark that block as
empty and keep it as the current block. Previously we freed that block
and set the current block to NULL. Doing that meant we needed a special
case in GenerationAlloc() to check if GenerationContext.block was NULL.
So this both reduces free/malloc calls and reduces the work done in
GenerationAlloc().
In passing, improve some comments in aset.c.
Discussion: https://postgr.es/m/CAApHDvpHVSJqqb4B4OZLixr=CotKq-eKkbwZqvZVo_biYvUvQA@mail.gmail.com
| |
While working on 4c2369ac5, I noticed we went out of our way not to
support clauses on boolean partitioned tables in the form of "IS
UNKNOWN" and "IS NOT UNKNOWN". It's almost as much code to disallow
this as it is to allow it, so let's allow it.
Discussion: https://postgr.es/m/CAApHDvobKtcN6+xOuOfcutfp6T7jP=JPA9y3=MAEqnuKdDsQrw@mail.gmail.com
| |
This test serves as a way to demonstrate how to use the features
introduced in 37b369dc67bc, while providing coverage for the bug fixed
by 7863ee4def65, which caused the startup process to throw "PANIC:
could not locate a valid checkpoint record" when starting recovery.
The test checks that a node is able to properly restart following a
crash when a restart point was finishing across a promotion, with an
injection point added in the middle of CreateRestartPoint() to stop the
restartpoint in flight. Note that this test fails when 7863ee4def65 is
reverted.
Kyotaro Horiguchi is the original author of this test, which was
originally posted on the thread where 7863ee4def65 was discussed. I
have just upgraded and polished it to rely on injection points, making
it much cheaper to reproduce the failure.
This test requires injection points to be enabled in the builds, hence
meson and ./configure need an update to pass this knowledge down to the
test. The name of the new injection point follows the same naming
convention as 6a1ea02c491d. The Makefile's EXTRA_INSTALL of recovery
TAP tests is updated to include modules/injection_points.
Author: Kyotaro Horiguchi, Michael Paquier
Reviewed-by: Andrey Borodin, Bertrand Drouvot
Discussion: https://postgr.es/m/ZdLuxBk5hGpol91B@paquier.xyz
| |
Now that BackendId is just another index into the proc array, it is
redundant with the 0-based proc numbers used in other places. Replace
all usage of backend IDs with proc numbers.
The only place where the term "backend id" remains is in a few pgstat
functions that expose backend IDs at the SQL level. Those IDs are now
in fact 0-based ProcNumbers too, but the documentation still calls
them "backend ids". That term still seems appropriate to describe what
the numbers are, so I let it be.
One user-visible effect is that pg_temp_0 is now a valid temp schema
name, for the backend with ProcNumber 0.
Reviewed-by: Andres Freund
Discussion: https://www.postgresql.org/message-id/8171f1aa-496f-46a6-afc3-c46fe7a9b407@iki.fi
| |
Previously, backend ID was an index into the ProcState array, in the
shared cache invalidation manager (sinvaladt.c). The entry in the
ProcState array was reserved at backend startup by scanning the array
for a free entry, and that was also when the backend got its backend
ID. Things become slightly simpler if we redefine backend ID to be the
index into the PGPROC array, and directly use it also as an index to
the ProcState array. This uses a little more memory, as we reserve a
few extra slots in the ProcState array for aux processes that don't
need them, but the simplicity is worth it.
Aux processes now also have a backend ID. This simplifies the
reservation of BackendStatusArray and ProcSignal slots.
You can now convert a backend ID into an index into the PGPROC array
simply by subtracting 1. We still use 0-based "pgprocnos" in various
places, for indexes into the PGPROC array, but the only difference now
is that backend IDs start at 1 while pgprocnos start at 0. (The next
commit will get rid of the term "backend ID" altogether and make
everything 0-based.)
There is still a 'backendId' field in PGPROC, now part of 'vxid' which
encapsulates the backend ID and local transaction ID together. It's
needed for prepared xacts. For regular backends, the backendId is
always equal to pgprocno + 1, but for prepared xact PGPROC entries,
it's the ID of the original backend that processed the transaction.
Reviewed-by: Andres Freund, Reid Thompson
Discussion: https://www.postgresql.org/message-id/8171f1aa-496f-46a6-afc3-c46fe7a9b407@iki.fi
| |
Per suggestion from Kyotaro Horiguchi
Discussion: https://postgr.es/m/20240229.130404.1411153273308142188.horikyota.ntt@gmail.com
| |
In the past, FileRead() and FileWrite() used types based on the Unix
read() and write() functions from before C and POSIX standardization,
though not exactly (we had int for amount instead of unsigned). In
commit 2d4f1ba6 we changed to the appropriate standard C types, just
like the modern POSIX functions they wrap, but again not exactly: the
return type stayed as int. In theory, a ssize_t value could be returned
by the underlying call that is too large for an int.
That wasn't really a live bug, because we don't expect PostgreSQL code
to perform reads or writes of gigabytes, and OSes probably apply
internal caps smaller than that anyway. This change is done on the
principle that the return type might as well follow the standard
interfaces consistently.
Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/1672202.1703441340%40sss.pgh.pa.us
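The resulting declarations look roughly like this; the parameter lists are
recalled from fd.h rather than quoted from the commit, so treat them as
approximate.

    /* before: int return type despite the size_t amount */
    extern int      FileRead(File file, void *buffer, size_t amount,
                             off_t offset, uint32 wait_event_info);

    /* after: return type follows the POSIX read()/write() convention */
    extern ssize_t  FileRead(File file, void *buffer, size_t amount,
                             off_t offset, uint32 wait_event_info);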
| |
This commit switches pg_enc2gettext_tbl[] in encnames.c to use C99
designated initializer syntax.
pg_bind_textdomain_codeset() is simplified so that it can do a direct
lookup into the gettext() array with a value of the enum pg_enc
rather than doing a loop through all its elements, as long as the
encoding value provided by GetDatabaseEncoding() is in the correct range
of supported encoding values. Note that PG_MULE_INTERNAL gains a value
in the array, pointing to NULL.
Author: Jelte Fennema-Nio
Discussion: https://postgr.es/m/CAGECzQT3caUbcCcszNewCCmMbCuyP7XNAm60J3ybd6PN5kH2Dw@mail.gmail.com
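A minimal sketch of the designated-initializer layout described above;
entries are abbreviated and the codeset strings shown are assumptions.

    static const char *const pg_enc2gettext_tbl[] = {
        [PG_UTF8] = "UTF-8",
        [PG_LATIN1] = "LATIN1",
        /* ... one slot per supported pg_enc value ... */
        [PG_MULE_INTERNAL] = NULL,      /* present in the array, but NULL */
    };

    /* direct lookup, valid once the encoding value is range-checked */
    const char *codeset = pg_enc2gettext_tbl[GetDatabaseEncoding()];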
| |
Currently, this variable is an XLogRecPtr protected by a spinlock.
By converting it to an atomic variable, we can remove the spinlock,
which saves a small amount of shared memory space. Since this code
is not performance-critical, we use atomic operations with full
barrier semantics to make it easy to reason about correctness.
Author: John Morris
Reviewed-by: Michael Paquier, Robert Haas, Andres Freund, Stephen Frost, Bharath Rupireddy
Discussion: https://postgr.es/m/BYAPR13MB26772534335255E50318C574A0409%40BYAPR13MB2677.namprd13.prod.outlook.com
Discussion: https://postgr.es/m/MN2PR13MB2688FD8B757316CB5C54C8A2A0DDA%40MN2PR13MB2688.namprd13.prod.outlook.com
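The conversion pattern looks roughly as follows; the variable and struct
names are illustrative, and the membarrier read/write functions are the
full-barrier atomics referenced by the follow-up commit below.

    /* before: XLogRecPtr protected by a spinlock */
    SpinLockAcquire(&ctl->info_lck);
    ctl->someRecPtr = lsn;
    SpinLockRelease(&ctl->info_lck);

    /* after: atomic with full-barrier semantics, no spinlock needed */
    pg_atomic_write_membarrier_u64(&ctl->someRecPtr, lsn);

    /* ... and readers do ... */
    lsn = pg_atomic_read_membarrier_u64(&ctl->someRecPtr);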
| |
Commit bd5132db55 introduced new atomic read/write functions with
full barrier semantics, which are intended to simplify converting
non-performance-critical code to use atomic variables. This commit
demonstrates one such conversion.
Reviewed-by: Yong Li
Discussion: https://postgr.es/m/20231110205128.GB1315705%40nathanxps13
| |
This allows the target relation of MERGE to be an auto-updatable or
trigger-updatable view, and includes support for WITH CHECK OPTION,
security barrier views, and security invoker views.
A trigger-updatable view must have INSTEAD OF triggers for every type
of action (INSERT, UPDATE, and DELETE) mentioned in the MERGE command.
An auto-updatable view must not have any INSTEAD OF triggers. Mixing
auto-update and trigger-update actions (i.e., having a partial set of
INSTEAD OF triggers) is not supported.
Rule-updatable views are also not supported, since there is no
rewriter support for non-SELECT rules with MERGE operations.
Dean Rasheed, reviewed by Jian He and Alvaro Herrera.
Discussion: https://postgr.es/m/CAEZATCVcB1g0nmxuEc-A+gGB0HnfcGQNGYH7gS=7rq0u0zOBXA@mail.gmail.com
| |
RangeTblEntry.funcordinality should be jumbled, because the WITH
ORDINALITY clause changes the query result.
This was apparently an oversight in the past.
Discussion: https://www.postgresql.org/message-id/flat/d7f421f8-fd6d-4759-adc3-247090a5d44b%40eisentraut.org
| |
This field has been redundant ever since it was added by commit
25e777cf8e, which split up ExecUpdate() and ExecDelete() into reusable
pieces. The only place that reads it is ExecMergeMatched(), if the
result from ExecUpdateAct() is TM_Ok. However, all paths through
ExecUpdateAct() that return TM_Ok also set this field to true, so the
return status by itself is sufficient to tell if the update happened.
Removing this field is a modest simplification, and it brings the
UPDATE path in ExecMergeMatched() more into line with ExecUpdate(),
ensuring that ExecUpdateEpilogue() is always called if ExecUpdateAct()
returns TM_Ok, reducing the chance of bugs.
Dean Rasheed, reviewed by Alvaro Herrera.
Discussion: https://postgr.es/m/CAEZATCWGGmigGBzLHkJm5Ccv2mMxXmwi3%2Buq0yhwDHm-tsvSLg%40mail.gmail.com
| |
dsa_dump would print a large negative number instead of zero for
segment bin 0. Fix by explicitly checking for underflow and adding a
special case for bin 0. Backpatch to all supported versions.
Author: Ian Ilyasov <ianilyasov@outlook.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Discussion: https://postgr.es/m/GV1P251MB1004E0D09D117D3CECF9256ECD502@GV1P251MB1004.EURP251.PROD.OUTLOOK.COM
Backpatch-through: v12
| |
Ensure that an always-secure search path is set for both local and
remote connections during slot synchronization, so that malicious users
can't redirect user code (e.g. operators).
In passing, improve the name of a define, remove a spurious return
statement, and make a minor change in one of the comments.
Author: Bertrand Drouvot and Shveta Malik
Reviewed-by: Amit Kapila, Peter Smith
Discussion: https://postgr.es/m/514f6f2f-6833-4539-39f1-96cd1e011f23@enterprisedb.com
Discussion: https://postgr.es/m/ZdcejBDCr+wlVGnO@ip-10-97-1-34.eu-west-3.compute.internal
Discussion: https://postgr.es/m/CAJpy0uBNP=nrkNJkJSfF=jSocEh8vU2Owa8Rtpi=63fG=SvfVQ@mail.gmail.com
| |
In the case where the target timestamp is before the origin timestamp
and their difference is already an exact multiple of the stride, the
code incorrectly subtracted the stride anyway.
Also detect several integer-overflow cases that previously produced
bogus results. (The submitted patch tried to avoid overflow, but
I'm not convinced it's right, and problematic cases are so far out of
the plausibly-useful range that they don't seem worth sweating over.
Let's just use overflow-detecting arithmetic and throw errors.)
timestamp_bin() and timestamptz_bin() are basically identical and
so had identical bugs. Fix both.
Report and patch by Moaaz Assali, adjusted some by me. Back-patch
to v14 where date_bin() was introduced.
Discussion: https://postgr.es/m/CALkF+nvtuas-2kydG-WfofbRSJpyODAJWun==W-yO5j2R4meqA@mail.gmail.com
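A condensed sketch of the corrected rounding rule (not the actual
timestamp.c code), using the overflow-checked helper from common/int.h:

    int64       tm_diff,
                tm_modulo;

    if (pg_sub_s64_overflow(timestamp, origin, &tm_diff))
        ereport(ERROR,
                (errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),
                 errmsg("timestamp out of range")));

    tm_modulo = tm_diff % stride_usecs;

    /*
     * C division truncates toward zero, so a negative remainder means the
     * target lies before the origin and must be pushed down to the previous
     * bin boundary; but only when the remainder is nonzero, which is the
     * case the old code got wrong.
     */
    if (tm_modulo < 0)
        tm_modulo += stride_usecs;

    result = timestamp - tm_modulo;     /* the real code also checks overflow here */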
| |
More precisely, what we do here is make the SLRU cache sizes
configurable with new GUCs, so that sites with high concurrency and big
ranges of transactions in flight (resp. multixacts/subtransactions) can
benefit from bigger caches. In order for this to work with good
performance, two additional changes are made:
1. the cache is divided into "banks" (to borrow terminology from CPU
caches), and algorithms such as buffer eviction and buffer search only affect
one specific bank. This forestalls the problem that linear searching
for a specific buffer across the whole cache takes too long: we only
have to search the specific bank, whose size is small. This work is
authored by Andrey Borodin.
2. Change the locking regime for the SLRU banks, so that each bank uses
a separate LWLock. This allows for increased scalability. This work
is authored by Dilip Kumar. (A part of this was previously committed as
d172b717c6f4.)
Special care is taken so that the algorithms that can potentially
traverse more than one bank release one bank's lock before acquiring the
next. This should happen rarely, but particularly clog.c's group commit
feature needed code adjustment to cope with this. I (Álvaro) also added
lots of comments to make sure the design is sound.
The new GUCs match the names introduced by bcdfa5f2e2f2 in the
pg_stat_slru view.
The default values for these parameters are similar to the previous
sizes of each SLRU. commit_ts, clog and subtrans accept value 0, which
means to adjust by dividing shared_buffers by 512 (so 2MB for every 1GB
of shared_buffers), with a cap of 8MB. (A new slru.c function
SimpleLruAutotuneBuffers() was added to support this.) The cap was
previously 1MB for clog, so for sites with more than 512MB of shared
memory the total memory used increases, which is likely a good tradeoff.
However, other SLRUs (notably multixact ones) retain smaller sizes and
don't support a configured value of 0. These values based on
shared_buffers may need to be revisited, but that's an easy change.
There was some resistance to adding these new GUCs: it would be better
to adjust to memory pressure automatically somehow, for example by
stealing memory from shared_buffers (where the caches can grow and
shrink naturally). However, doing that seems to be a much larger
project and one which has made virtually no progress in several years,
and because this is such a pain point for so many users, here we take
the pragmatic approach.
Author: Andrey Borodin <x4mmm@yandex-team.ru>
Author: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: Amul Sul, Gilles Darold, Anastasia Lubennikova,
Ivan Lazarev, Robert Haas, Thomas Munro, Tomas Vondra,
Yura Sokolov, Васильев Дмитрий (Dmitry Vasiliev).
Discussion: https://postgr.es/m/2BEC2B3F-9B61-4C1D-9FB5-5FAB0F05EF86@yandex-team.ru
Discussion: https://postgr.es/m/CAFiTN-vzDvNz=ExGXz6gdyjtzGixKSqs0mKHMmaQ8sOSEFZ33A@mail.gmail.com
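To illustrate the bank idea, here is a sketch using the field names
mentioned in a related commit above; the real slru.c code differs in its
details.

    /* which bank a page lives in; bank_mask is nbanks - 1, a power of two */
    static inline int
    SlruBankNumber(SlruShared shared, int64 pageno)
    {
        return (int) (pageno & shared->bank_mask);
    }

    /* per-bank lock: searches and evictions only touch one bank's slots */
    static inline LWLock *
    SlruBankLockSketch(SlruShared shared, int64 pageno)
    {
        return &shared->bank_locks[SlruBankNumber(shared, pageno)].lock;
    }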
| |
There isn't a lot of user demand for AIX support, we have a bunch of
hacks to work around AIX-specific compiler bugs and idiosyncrasies,
and no one has stepped up to the plate to properly maintain it.
Remove support for AIX to get rid of that maintenance overhead. It's
still supported on stable branches.
The acute issue that triggered this decision was that after commit
8af2565248, the AIX buildfarm members have been hitting this
assertion:
TRAP: failed Assert("(uintptr_t) buffer == TYPEALIGN(PG_IO_ALIGN_SIZE, buffer)"), File: "md.c", Line: 472, PID: 2949728
Apparently the "pg_attribute_aligned(a)" attribute doesn't work on AIX
for values larger than PG_IO_ALIGN_SIZE, for a static const variable.
That could be worked around, but we decided to just drop the AIX support
instead.
Discussion: https://www.postgresql.org/message-id/20240224172345.32@rfd.leadboat.com
Reviewed-by: Andres Freund, Noah Misch, Thomas Munro
| |
The new names are intended to match those in an upcoming patch that adds
a few GUCs to configure the SLRU buffer sizes.
Backwards compatibility concern: this changes the accepted names for
the function pg_stat_reset_slru(). Since this function recognizes "any other
string" as a request to reset the entry for "other", this means that
calling it with the old names would silently reset "other" instead of
doing nothing or throwing an error.
Reviewed-by: Andrey M. Borodin <x4mmm@yandex-team.ru>
Discussion: https://postgr.es/m/202402261616.dlriae7b6emv@alvherre.pgsql
| |
This has not been needed since 9d77708d83ee; before that, a loop that
printed all the possible GUC groups relied on the last element being
NULL.
Author: Japin Li
Reviewed-By: Jelte Fennema-Nio
Discussion: https://postgr.es/m/CAGECzQT3caUbcCcszNewCCmMbCuyP7XNAm60J3ybd6PN5kH2Dw@mail.gmail.com
| |
Allocating from a free list or from a block that already contains enough
space is deemed to be the common code path, and we want to optimize for
it. Having to allocate a new block, either a normal block or a dedicated
one for a large allocation, is deemed less common, so we class those
paths as "cold". Both cold paths require a malloc() and so are going to
be slower as a result of that regardless.
The main motivation here is to remove the calls to malloc() in the hot
path and because of this, the compiler is now free to not bother setting
up the stack frame in AllocSetAlloc(), thus making the hot path much
cheaper.
Author: Andres Freund
Reviewed-by: David Rowley
Discussion: https://postgr.es/m/20210719195950.gavgs6ujzmjfaiig@alap3.anarazel.de
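The structure of the split can be shown with a stripped-down,
self-contained sketch; this is not the aset.c code, just the shape of it.

    #include <stdlib.h>

    /* cold path: fall back to a dedicated malloc(); kept out of line */
    __attribute__((noinline)) static void *
    alloc_cold(size_t size)
    {
        return malloc(size);
    }

    /*
     * Hot path: bump-allocate from the current block when it has room.
     * The only other call is in tail position, so the compiler can skip
     * setting up a stack frame and jump straight to the cold path.
     */
    void *
    alloc_hot(char **freeptr, char *endptr, size_t size)
    {
        if ((size_t) (endptr - *freeptr) >= size)
        {
            void       *chunk = *freeptr;

            *freeptr += size;
            return chunk;
        }
        return alloc_cold(size);
    }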
| |
This is in the same spirit as ef5e2e90859a, this time updating some
arrays in parser.c, relpath.c, guc_tables.c and pg_dump_sort.c so that
the order of their elements no longer needs to match the enum structures
they are based on.
Author: Jelte Fennema-Nio
Reviewed-by: Jian He, Japin Li
Discussion: https://postgr.es/m/CAGECzQT3caUbcCcszNewCCmMbCuyP7XNAm60J3ybd6PN5kH2Dw@mail.gmail.com
| |
A recent commit added a copy_function member to the
dshash_parameters struct, but it missed updating a couple of
comments that refer to the function pointer members of this struct.
One of those comments also refers to a tranche_name member and non-arg
variants of the function pointer members, all of which were
either removed during development or removed shortly after dshash
table support was committed.
Oversights in commits 8c0d7bafad, d7694fc148, and 42a1de3013.
Discussion: https://postgr.es/m/20240227045213.GA2329190%40nathanxps13
| |
This is a followup to commit 66ea94e8e6.
Error messages concerning incorrect formats for date-time types are
unified and parameterized, instead of using a fully separate error
message for each type.
Similarly, error messages regarding numeric and string arguments to
certain items are standardized, and instead of saying that the argument
is out of range they simply say that it is invalid. The actual invalid
arguments to these items are now shown in the error message.
Error messages relating to numeric inputs of NaN or Infinity are
made more informative.
Jeevan Chalke and Kyotaro Horiguchi, with some input from Tom Lane.
Discussion: https://postgr.es/m/20240129.121200.235012930453045390.horikyota.ntt@gmail.com
| |
object_classes[] provided unnecessary indirection between catalog OIDs
and the enum ObjectClass when calling add_object_address(). This array
was originally introduced in 30ec31604d5 and was useful because not all
relation OIDs were compile-time constants back then, which has long
since ceased to be the case for the elements of ObjectClass.
This commit removes object_classes[], switching to the catalog OIDs
when calling add_object_address(). This shaves some code and saves
maintenance effort, since the enum ObjectClass and the array previously
had to be kept in sync when adding new object types.
Reported-by: Jeff Davis
Author: Jelte Fennema-Nio
Reviewed-by: Jian He, Michael Paquier
Discussion: https://postgr.es/m/CAGECzQT3caUbcCcszNewCCmMbCuyP7XNAm60J3ybd6PN5kH2Dw@mail.gmail.com
| |
Many modern compilers are able to optimize function calls to functions
where the parameters of the called function match a leading subset of
the calling function's parameters. If there are no instructions in the
calling function after the function is called, then the compiler is free
to avoid any stack frame setup and implement the function call as a
"jmp" rather than a "call". This is called sibling call optimization.
Here we adjust the memory allocation functions in mcxt.c to allow this
optimization. This requires moving some responsibility into the memory
context implementations themselves. It's now the responsibility of the
MemoryContext to check for malloc failures. This is good not only
because it allows the sibling call optimization, but also because most
small and medium allocations won't call malloc() and instead allocate
memory from an existing block. That can't fail, so checking for NULLs
in that case isn't required.
Also, traditionally it's been the responsibility of palloc and the other
allocation functions in mcxt.c to check for invalid allocation size
requests. Here we also move the responsibility of checking that into the
MemoryContext. This isn't to allow the sibling call optimization, but
more because most of our allocators handle large allocations separately
and we can just add the size check when doing large allocations. We no
longer check this for non-large allocations at all.
To make checking the allocation request sizes and ERROR handling easier,
add some helper functions to mcxt.c for the allocators to use.
Author: Andres Freund
Reviewed-by: David Rowley
Discussion: https://postgr.es/m/20210719195950.gavgs6ujzmjfaiig@alap3.anarazel.de
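For illustration, the dispatch in mcxt.c now has roughly the following
shape (simplified; the trailing "flags" argument to the alloc method is an
assumption based on the description above):

    void *
    palloc_sketch(Size size)
    {
        MemoryContext context = CurrentMemoryContext;

        context->isReset = false;

        /*
         * Nothing happens after this call, so the compiler can emit a plain
         * jump (sibling call).  The context implementation is now the one
         * responsible for validating the request size and for reporting
         * malloc() failure.
         */
        return context->methods->alloc(context, size, 0);
    }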
| |
One comment mentioned indexes, but the relations being opened are
sequences.
Reported-by: Matthias van de Meent
Discussion: https://postgr.es/m/CAEze2WiMGNG9XK3NSUen-5BARhCnP=u=FXnf8pvpL2qDKeOsZg@mail.gmail.com
| |
Presently, string keys are not well-supported for dshash tables.
The dshash code always copies key_size bytes into new entries'
keys, and dshash.h only provides compare and hash functions that
forward to memcmp() and tag_hash(), neither of which stops at
the first NUL. This means that callers must pad string keys so
that the data beyond the first NUL does not adversely affect the
results of copying, comparing, and hashing the keys.
To better support string keys in dshash tables, this commit does
a couple things:
* A new copy_function field is added to the dshash_parameters
struct. This function pointer specifies how the key should be
copied into new table entries. For example, we only want to copy
up to the first NUL byte for string keys. A dshash_memcpy()
helper function is provided and used for all existing in-tree
dshash tables without string keys.
* A set of helper functions for string keys are provided. These
helper functions forward to strcmp(), strcpy(), and
string_hash(), all of which ignore data beyond the first NUL.
This commit also adjusts the DSM registry's dshash table to use the
new helper functions for string keys.
Reviewed-by: Andy Fan
Discussion: https://postgr.es/m/20240119215941.GA1322079%40nathanxps13
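A sketch of how a string-keyed table might fill in its parameters with the
new helpers; the entry type, the tranche, and the exact field names are
assumptions of this sketch.

    typedef struct MyEntry
    {
        char        name[NAMEDATALEN];  /* key: NUL-terminated string */
        int         value;
    } MyEntry;

    static const dshash_parameters my_params = {
        .key_size = NAMEDATALEN,
        .entry_size = sizeof(MyEntry),
        .compare_function = dshash_strcmp,      /* stops at the first NUL */
        .hash_function = dshash_strhash,
        .copy_function = dshash_strcpy,
        .tranche_id = LWTRANCHE_MY_TABLE,       /* hypothetical tranche */
    };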
| |
A couple of dshash_create() callers provide 0 for the 'void *arg'
argument, which might give readers the incorrect impression that
this is some sort of "flags" parameter.
Reviewed-by: Andy Fan
Discussion: https://postgr.es/m/20240119215941.GA1322079%40nathanxps13
| |
Pretty boring.
|
| |
Information about sequences is cached for each backend for currval() and
nextval(), and the updates of some cached information were mixed into
the middle of computations based on the other properties of a sequence:
the increment value in nextval() and the cached state when altering a
sequence.
Grouping them makes the code easier to follow and to refactor in the
future, when splitting the computation and the SeqTable change parts.
Note that the cached data is untouched between the areas where these
cache updates are moved.
Issue noticed while doing some refactoring of the sequence code.
Author: Michael Paquier
Reviewed-by: Tomas Vondra
Discussion: https://postgr.es/m/ZWlohtKAs0uVVpZ3@paquier.xyz
| |
Similarly to tables and indexes, these functions are able to open
relations with a sequence relkind, which is useful to make a distinction
with the other relation kinds. Previously, commands/sequence.c used a
mix of table_{close,open}() and relation_{close,open}() routines when
manipulating sequence relations, so this clarifies the code.
A direct effect of this change is to align the error messages produced
when attempting DDLs for sequences on relations with an unexpected
relkind, like a table or an index with ALTER SEQUENCE, providing an
extra error detail about the relkind of the relation used in the DDL
query.
Author: Michael Paquier
Reviewed-by: Tomas Vondra
Discussion: https://postgr.es/m/ZWlohtKAs0uVVpZ3@paquier.xyz
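Illustrative usage of the new helpers (a sketch, not a hunk from the
commit):

    Relation    seqrel;

    /* errors out with a relkind-specific message if relid is not a sequence */
    seqrel = sequence_open(relid, AccessShareLock);

    /* ... read or update the sequence ... */

    sequence_close(seqrel, NoLock);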
| |
Not only did the format placeholder not match the variable, the
variable also didn't match the function it was getting its value from.
|
| |
When this assertion was installed (in commit d2f60a3ab), I thought
it was only for catching server logic errors that caused accesses to
catalogs that were undergoing index rebuilds. However, it will also
fire in case of a user-defined index expression that attempts to
access its own table. We occasionally see reports of people trying
to do that, and typically getting unintelligible low-level errors
as a result. We can provide a more on-point message by making this
a regular runtime check.
While at it, adjust the similar error check in
systable_beginscan_ordered to use the same message text. That one
is (probably) not reachable without a coding bug, but we might as
well use a translatable message if we have one.
Per bug #18363 from Alexander Lakhin. Back-patch to all supported
branches.
Discussion: https://postgr.es/m/18363-e3598a5a572d0699@postgresql.org
| |
Reported-by: Alexander Lakhin
| |
This commit introduces a new field 'sublevels_up' in ReplaceVarnoContext,
and enhances replace_varno_walker() to:
1) recurse into subselects with sublevels_up increased, and
2) perform the replacement only when varlevelsup is equal to sublevels_up.
This commit also fixes some outdated comments. Besides adding relevant
test cases, it also unifies some of the existing SJE test cases.
Discussion: https://postgr.es/m/CAMbWs4-%3DPO6Mm9gNnySbx0VHyXjgnnYYwbN9dth%3DTLQweZ-M%2Bg%40mail.gmail.com
Author: Richard Guo
Reviewed-by: Andrei Lepikhov, Alexander Korotkov
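The recursion described above follows the usual query-walker idiom;
roughly (a sketch, with the context field names assumed):

    if (IsA(node, Query))
    {
        /* recurse into subselects with sublevels_up increased */
        bool        result;

        context->sublevels_up++;
        result = query_tree_walker((Query *) node, replace_varno_walker,
                                   (void *) context, 0);
        context->sublevels_up--;
        return result;
    }
    if (IsA(node, Var))
    {
        Var        *var = (Var *) node;

        /* replace only when the Var belongs to the target query level */
        if (var->varno == context->from &&
            var->varlevelsup == context->sublevels_up)
            var->varno = context->to;
        return false;
    }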
| |
build_child_join_sjinfo creates a derived SpecialJoinInfo in
the short-lived GEQO context, but afterwards the semi_rhs_exprs
from that may be used in a UniquePath for a child base relation.
This breaks the expectation that all base-relation-level structures
are in the planning-lifespan context, leading to use of a dangling
pointer with probable ensuing crash later on in create_unique_plan.
To fix, copy the expression trees when making a UniquePath.
Per bug #18360 from Alexander Lakhin. This has been broken since
partitionwise joins were added, so back-patch to all supported
branches.
Discussion: https://postgr.es/m/18360-a23caf3157f34e62@postgresql.org
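The essence of the fix, as a sketch: copy the expression list out of the
short-lived context before storing it in the long-lived path.

    /* before: stores a pointer into GEQO's short-lived memory */
    pathnode->uniq_exprs = sjinfo->semi_rhs_exprs;

    /* after: the UniquePath owns a copy made in planner-lifespan memory */
    pathnode->uniq_exprs = copyObject(sjinfo->semi_rhs_exprs);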