Error messages for creating a foreign key on a partitioned table using
ONLY or NOT VALID were wrong about the objects they worked on.
Along the way, this commit adds some regression tests that were missing
for those cases.
Author: Laurenz Albe
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/c11c05810a9ed65e9b2c817a9ef442275a32fe80.camel@cybertec.at

In the wake of commit f2343653f, we no longer need some fields that
were used before to control executor lock acquisitions:
* PlannedStmt.nonleafResultRelations can go away entirely.
* partitioned_rels can go away from Append, MergeAppend, and ModifyTable.
However, ModifyTable still needs to know the RT index of the partition
root table if any, which was formerly kept in the first entry of that
list. Add a new field "rootRelation" to remember that. rootRelation is
partly redundant with nominalRelation, in that if it's set it will have
the same value as nominalRelation. However, the latter field has a
different purpose so it seems best to keep them distinct.
Amit Langote, reviewed by David Rowley and Jesper Pedersen,
and whacked around a bit more by me
Discussion: https://postgr.es/m/468c85d9-540e-66a2-1dde-fec2b741e688@lab.ntt.co.jp

Commit 2fbdf1b38bc changed the order in which we inserted catalog rows
when creating partitions, so that we could remove an unsightly hack
required for untimely relcache invalidations. However, that commit only
changed the ordering for CREATE TABLE PARTITION OF, and left ALTER TABLE
ATTACH PARTITION unchanged, so the latter can be affected when catalog
invalidations occur, for instance when the partition key involves an SQL
function.
Reported-by: Rajkumar Raghuwanshi
Author: Amit Langote
Reviewed-by: Michaël Paquier
Discussion: https://postgr.es/m/CAKcux6=nTz9KSfTr_6Z2mpzLJ_09JN-rK6=dWic6gGyTSWueyQ@mail.gmail.com

Index DDL cascading on partitioned tables introduced a way for ALTER
TABLE to be called reentrantly. This caused an important deficiency
in event trigger support to be exposed: on exiting the reentrant call,
the alter table state object was clobbered, causing a crash when the
outer alter table tries to finalize its processing. Fix the crash by
creating a stack of event trigger state objects. There are still ways
to cause things to misbehave (and probably other crashers) with more
elaborate tricks, but at least it now doesn't crash in the obvious
scenario.
Backpatch to 9.5, where DDL deparsing of event triggers was introduced.
Reported-by: Marco Slot
Authors: Michaël Paquier, Álvaro Herrera
Discussion: https://postgr.es/m/CANNhMLCpi+HQ7M36uPfGbJZEQLyTy7XvX=5EFkpR-b1bo0uJew@mail.gmail.com
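
As an illustration of the fix's shape, here is a minimal standalone C
sketch (not the actual event-trigger code; the structure and function
names are invented for the example): per-invocation state is pushed onto
a stack and popped again, so a reentrant call no longer clobbers the
outer invocation's state.

    #include <stddef.h>

    /* Hypothetical per-invocation state; the real fields differ. */
    typedef struct AlterTableState
    {
        struct AlterTableState *previous;   /* saved outer invocation */
        int                     nesting_level;
    } AlterTableState;

    static AlterTableState *currentState = NULL;

    /* Entering a (possibly reentrant) ALTER TABLE: push fresh state. */
    static void
    push_alter_state(AlterTableState *st)
    {
        st->previous = currentState;
        st->nesting_level = currentState ? currentState->nesting_level + 1 : 0;
        currentState = st;
    }

    /* Leaving it: restore the outer state instead of leaving it clobbered. */
    static void
    pop_alter_state(void)
    {
        if (currentState != NULL)
            currentState = currentState->previous;
    }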

Commit 9a3cebeaa changed things so that parallel workers didn't obtain
any lock of their own on tables they access. That was clearly a bad
idea, but I'd mistakenly supposed that it was the intended end result
of the series of patches for simplifying the executor's lock management.
Undo that change in relation_open(), and adjust ExecOpenScanRelation()
so that it gets the correct lock if inside a parallel worker.
In passing, clean up some more obsolete comments about when locks
are acquired.
Discussion: https://postgr.es/m/468c85d9-540e-66a2-1dde-fec2b741e688@lab.ntt.co.jp

We already have appropriate locks on every relation listed in the
query's rangetable before we reach the executor. Take the next step
in exploiting that knowledge by removing code that worries about
taking locks on non-leaf result relations in a partitioned table.
In particular, get rid of ExecLockNonLeafAppendTables and a stanza in
InitPlan that asserts we already have locks on certain such tables.
In passing, clean up some now-obsolete comments in InitPlan.
Amit Langote, reviewed by David Rowley and Jesper Pedersen,
and whacked around a bit more by me
Discussion: https://postgr.es/m/468c85d9-540e-66a2-1dde-fec2b741e688@lab.ntt.co.jp

Places that aren't testing for sign should not use the more expensive
function; it's just wasteful, not to mention being a cognitive load
for readers who may know what isinf() is but not is_infinite().
As things stand, we actually don't need is_infinite() anyplace except
float4out/float8out, which means it could potentially go away altogether
after the changes I proposed in <13178.1538794717@sss.pgh.pa.us>.
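
A minimal standalone C sketch of the distinction (not PostgreSQL code;
the helper names are invented): isinf() answers the cheap yes/no
question, while a sign-classifying helper in the spirit of is_infinite()
is only needed where +Inf and -Inf must be told apart, as in the float
output functions.

    #include <math.h>

    /* Cheap check: is the value infinite at all?  Plain isinf() suffices. */
    static int
    value_is_infinite(double val)
    {
        return isinf(val) != 0;
    }

    /* Sign-classifying variant, only needed where the sign matters:
     * returns -1 for -Inf, +1 for +Inf, 0 otherwise. */
    static int
    infinity_sign(double val)
    {
        if (!isinf(val))
            return 0;
        return (val > 0.0) ? 1 : -1;
    }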

Previously, a worker process would establish values for these based on
its own start time. In v10 and up, this can trivially be shown to cause
misbehavior of transaction_timestamp(), timestamp_in(), and related
functions which are (perhaps unwisely?) marked parallel-safe. It seems
likely that other behaviors might diverge from what happens in the parent
as well.
It's not as trivial to demonstrate problems in 9.6 or 9.5, but I'm sure
it's still possible, so back-patch to all branches containing parallel
worker infrastructure.
In HEAD only, mark now() and statement_timestamp() as parallel-safe
(other affected functions already were). While in theory we could
still squeeze that change into v11, it doesn't seem important enough
to force a last-minute catversion bump.
Konstantin Knizhnik, whacked around a bit by me
Discussion: https://postgr.es/m/6406dbd2-5d37-4cb6-6eb2-9c44172c7e7c@postgrespro.ru

When computing statistical aggregates like variance, the common
schoolbook algorithm which computes the sum of the squares of the
values and subtracts the square of the mean can lead to a large loss
of precision when using floating point arithmetic, because the
difference between the two terms is often very small relative to the
terms themselves.
To avoid this, re-work these aggregates to use the Youngs-Cramer
algorithm, which is a proven, numerically stable algorithm that
directly aggregates the sum of the squares of the differences of the
values from the mean in a single pass over the data.
While at it, improve the test coverage to test the aggregate combine
functions used during parallel aggregation.
Per report and suggested algorithm from Erich Schubert.
Patch by me, reviewed by Madeleine Thompson.
Discussion: https://postgr.es/m/153313051300.1397.9594490737341194671@wrigleys.postgresql.org
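
A minimal standalone C sketch of the Youngs-Cramer accumulation
described above (not the committed transition functions; the struct and
function names are invented): N, Sx, and Sxx hold the count, the running
sum, and the running sum of squared differences from the current mean,
updated in a single pass with no large cancelling terms.

    #include <math.h>

    typedef struct
    {
        double  N;      /* number of values seen; zero-initialize state */
        double  Sx;     /* sum of values */
        double  Sxx;    /* sum of squares of differences from the mean */
    } YCState;

    static void
    yc_accum(YCState *st, double newval)
    {
        double  tmp;

        st->N += 1.0;
        st->Sx += newval;
        if (st->N > 1.0)
        {
            /* Youngs-Cramer update step */
            tmp = newval * st->N - st->Sx;
            st->Sxx += tmp * tmp / (st->N * (st->N - 1.0));
        }
    }

    /* Sample variance then falls out directly as Sxx / (N - 1). */
    static double
    yc_var_samp(const YCState *st)
    {
        return (st->N > 1.0) ? st->Sxx / (st->N - 1.0) : NAN;
    }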

This is used for example when attaching a partition to a partitioned
table which includes foreign keys, in which case the constraint name
was missing from the cloned data. This could lead to hard crashes,
since the constraint name is always expected when validating the foreign
key constraint. In particular, when using log_min_messages >= DEBUG1, a
log message would be generated with this unassigned constraint name,
leading to an assertion failure on HEAD.
While at it, rename a variable in ATExecAttachPartition which was
declared twice with the same name.
Author: Michael Paquier
Reviewed-by: Álvaro Herrera
Discussion: https://postgr.es/m/20181005042236.GG1629@paquier.xyz
Backpatch-through: 11

Historically we forbade datatype-specific comparison functions from
returning INT_MIN, so that it would be safe to invert the sort order
just by negating the comparison result. However, this was never
really safe for comparison functions that directly return the result
of memcmp(), strcmp(), etc, as POSIX doesn't place any such restriction
on those library functions. Buildfarm results show that at least on
recent Linux on s390x, memcmp() actually does return INT_MIN sometimes,
causing sort failures.
The agreed-on answer is to remove this restriction and fix relevant
call sites to not make such an assumption; code such as "res = -res"
should be replaced by "INVERT_COMPARE_RESULT(res)". The same is needed
in a few places that just directly negated the result of memcmp or
strcmp.
To help find places having this problem, I've also added a compile option
to nbtcompare.c that causes some of the commonly used comparators to
return INT_MIN/INT_MAX instead of their usual -1/+1. It'd likely be
a good idea to have at least one buildfarm member running with
"-DSTRESS_SORT_INT_MIN". That's far from a complete test of course,
but it should help to prevent fresh introductions of such bugs.
This is a longstanding portability hazard, so back-patch to all supported
branches.
Discussion: https://postgr.es/m/20180928185215.ffoq2xrq5d3pafna@alap3.anarazel.de
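
A minimal standalone C sketch of the hazard and the style of fix (the
real INVERT_COMPARE_RESULT() macro lives in the PostgreSQL headers; the
macro and function names here are invented stand-ins): plain negation of
a comparator result is unsafe because memcmp() may legally return
INT_MIN, and negating INT_MIN overflows.

    #include <string.h>

    /* Map any comparator result to its inverse without negating it. */
    #define INVERT_CMP(r)  ((r) < 0 ? 1 : ((r) > 0 ? -1 : 0))

    static int
    descending_cmp(const void *a, const void *b, size_t len)
    {
        int r = memcmp(a, b, len);

        /* return -r;  -- undefined behavior when r == INT_MIN */
        return INVERT_CMP(r);   /* safe for any value memcmp() can return */
    }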

This lists the contents of the temporary directory associated with a given
tablespace, which is useful for getting information about the on-disk
consumption caused by temporary files used by session queries. By default,
pg_default is scanned, and a tablespace can be specified as an argument.
This function is intended to be used by monitoring tools, and, unlike
pg_ls_dir(), access to it can be granted to non-superusers so that
those monitoring tools can observe the principle of least privilege.
Access is also given by default to members of pg_monitor.
Author: Nathan Bossart
Reviewed-by: Laurenz Albe
Discussion: https://postgr.es/m/92F458A2-6459-44B8-A7F2-2ADD3225046A@amazon.com

Instead of doing a lot of list_nth() accesses to es_range_table,
create a flattened pointer array during executor startup and index
into that to get at individual RangeTblEntrys.
This eliminates one source of O(N^2) behavior with lots of partitions.
(I'm not exactly convinced that it's the most important source, but
it's an easy one to fix.)
Amit Langote and David Rowley
Discussion: https://postgr.es/m/468c85d9-540e-66a2-1dde-fec2b741e688@lab.ntt.co.jp
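
A generic C sketch of the change's shape (not the executor code; the
list type here is an invented stand-in, not PostgreSQL's List): indexed
lookups into a linked list are O(N) each, so doing many of them is
O(N^2), whereas flattening the list into a pointer array once at startup
makes each later lookup O(1).

    #include <stdlib.h>

    typedef struct ListCell
    {
        void            *ptr;
        struct ListCell *next;
    } ListCell;

    /* The slow pattern being removed: walk the list for every lookup. */
    static void *
    nth_by_walking(ListCell *head, int n)
    {
        while (n-- > 0)
            head = head->next;
        return head->ptr;
    }

    /* The replacement pattern: flatten once, then index in O(1). */
    static void **
    flatten_list(ListCell *head, int length)
    {
        void  **array = malloc(length * sizeof(void *));

        for (int i = 0; i < length; i++, head = head->next)
            array[i] = head->ptr;
        return array;   /* array[i] now replaces nth_by_walking(head, i) */
    }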

Create an array estate->es_relations[] paralleling the es_range_table,
and store references to Relations (relcache entries) there, so that any
given RT entry is opened and closed just once per executor run. Scan
nodes typically still call ExecOpenScanRelation, but ExecCloseScanRelation
is no more; relation closing is now done centrally in ExecEndPlan.
This is slightly more complex than one would expect because of the
interactions with relcache references held in ResultRelInfo nodes.
The general convention is now that ResultRelInfo->ri_RelationDesc does
not represent a separate relcache reference and so does not need to be
explicitly closed; but there is an exception for ResultRelInfos in the
es_trig_target_relations list, which are manufactured by
ExecGetTriggerResultRel and have to be cleaned up by
ExecCleanUpTriggerState. (That much was true all along, but these
ResultRelInfos are now more different from others than they used to be.)
To allow the partition pruning logic to make use of es_relations[] rather
than having its own relcache references, adjust PartitionedRelPruneInfo
to store an RT index rather than a relation OID.
Amit Langote, reviewed by David Rowley and Jesper Pedersen,
some mods by me
Discussion: https://postgr.es/m/468c85d9-540e-66a2-1dde-fec2b741e688@lab.ntt.co.jp

When using the CREATE TABLE .. PARTITION OF syntax, it's possible to
cause a partition to get two primary keys if the parent already has one.
Tighten the check to disallow that.
Reported-by: Rajkumar Raghuwanshi
Author: Amul Sul
Discussion: https://postgr.es/m/CAKcux6=OnSV3-qd8Gb6W=KPPwcCz6Fe_O_MQYjTa24__Xn8XxA@mail.gmail.com

This moves the system administration functions for signalling backends
from backend/utils/adt/misc.c into a separate file dedicated to backend
signalling. No new functionality is introduced in this commit.
Author: Daniel Gustafsson
Reviewed-by: Michael Paquier, Álvaro Herrera
Discussion: https://postgr.es/m/C2C7C3EC-CC5F-44B6-9C78-637C88BD7D14@yesql.se

When specified, this option allows VACUUM to skip the work on a relation
if there is a conflicting lock on it when trying to open it at the
beginning of its processing.
Similarly to autovacuum, this comes with a couple of limitations while
the relation is processed, which can still cause the process to block:
- when opening the relation's indexes;
- when acquiring row samples for table inheritance trees, partition trees,
or certain types of foreign tables, if a lock is taken on some
leaves of such trees.
Author: Nathan Bossart
Reviewed-by: Michael Paquier, Andres Freund, Masahiko Sawada
Discussion: https://postgr.es/m/9EF7EBE4-720D-4CF1-9D0E-4403D7E92990@amazon.com
Discussion: https://postgr.es/m/20171201160907.27110.74730@wrigleys.postgresql.org

Instead of locking tables during executor startup, just Assert that
suitable locks were obtained already during the parse/plan pipeline
(or re-obtained by the plan cache). This must be so, else we have a
hazard that concurrent DDL has invalidated the plan.
This is pretty inefficient as well as undercommented, but it's all going
to go away shortly, so I didn't try hard. This commit is just another
attempt to use the buildfarm to see if we've missed anything in the plan
to simplify the executor's table management.
Note that the change needed here in relation_open() exposes that
parallel workers now really are accessing tables without holding any
lock of their own, whereas they were not doing that before this commit.
This does not give me a warm fuzzy feeling about that aspect of parallel
query; it does not seem like a good design, and we now know that it's
had exactly no actual testing. I think that we should modify parallel
query so that that change can be reverted.
Discussion: https://postgr.es/m/468c85d9-540e-66a2-1dde-fec2b741e688@lab.ntt.co.jp

I (Andres) was more than a bit hasty in committing 33001fd7a7072d48327
after last minute changes, leading to a number of problems (jit output
was only shown for JIT in parallel workers, and just EXPLAIN without
ANALYZE didn't work). Lukas luckily found these issues quickly.
Instead of combining instrumentation in standard_ExecutorEnd(), do
so on demand in the new ExplainPrintJITSummary().
Also update a documentation example of the JIT output, changed in
52050ad8ebec8d831.
Author: Lukas Fittl, with minor changes by me
Discussion: https://postgr.es/m/CAP53PkxmgJht69pabxBXJBM+0oc6kf3KHMborLP7H2ouJ0CCtQ@mail.gmail.com
Backpatch: 11, where JIT compilation was introduced

The API (EOH_flatten_into) that flattens the expanded value representation
expects the target address to be maxaligned. All its usages adhere to that
principle except when serializing datums for parallel query. Fix that
usage.
Diagnosed-by: Tom Lane
Author: Tom Lane and Amit Kapila
Backpatch-through: 9.6
Discussion: https://postgr.es/m/11629.1536550032@sss.pgh.pa.us
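
A minimal standalone C sketch of the alignment requirement (the macro
and constant here are invented stand-ins, not PostgreSQL's MAXALIGN):
a caller serializing into a shared buffer must round its write pointer
up to the platform's strictest alignment before handing it to an API
that expects a maxaligned target address.

    #include <stdint.h>

    #define MAX_ALIGNOF  8      /* stand-in for the strictest alignment */

    /* Round a buffer cursor up to the next maxaligned address. */
    static char *
    align_up(char *cursor)
    {
        uintptr_t p = (uintptr_t) cursor;

        p = (p + MAX_ALIGNOF - 1) & ~((uintptr_t) MAX_ALIGNOF - 1);
        return (char *) p;
    }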

Instead of recomputing the required lock levels in all these places,
just use what commit fdba460a2 made the parser store in the RTE fields.
This already simplifies the code measurably in these places, and
follow-on changes will remove a bunch of no-longer-needed infrastructure.
In a few cases, this change causes us to acquire a higher lock level
than we did before. This is OK primarily because said higher lock level
should've been acquired already at query parse time; thus, we're saving
a useless extra trip through the shared lock manager to acquire a lesser
lock alongside the original lock. The only known exception to this is
that re-execution of a previously planned SELECT FOR UPDATE/SHARE query,
for a table that uses ROW_MARK_REFERENCE or ROW_MARK_COPY methods, might
have gotten only AccessShareLock before. Now it will get RowShareLock
like the first execution did, which seems fine.
While there's more to do, push it in this state anyway, to let the
buildfarm help verify that nothing bad happened.
Amit Langote, reviewed by David Rowley and Jesper Pedersen,
and whacked around a bit more by me
Discussion: https://postgr.es/m/468c85d9-540e-66a2-1dde-fec2b741e688@lab.ntt.co.jp

It's inefficient to use a single slot for mapping between tuple
descriptors for multiple tuples, as previously done when using
ConvertPartitionTupleSlot(), as that means the slot's tuple descriptors
change for every tuple.
Previously we also, via ConvertPartitionTupleSlot(), built new tuples
after the mapping even in cases where we, immediately afterwards,
access individual columns again.
Refactor the code so one slot, on demand, is used for each
partition. That avoids having to change the descriptor (and allows us to
use the more efficient "fixed" tuple slots). Then use slot-to-slot
mapping to avoid unnecessarily forming a tuple.
As the naming between the tuple and slot mapping functions wasn't
consistent, rename them to execute_attr_map_{tuple,slot}. It's likely
that we'll also rename convert_tuples_by_* to denote that these
functions "only" build a map, but that's left for later.
Author: Amit Khandekar and Amit Langote, editorialized by me
Reviewed-By: Amit Langote, Amit Khandekar, Andres Freund
Discussion:
https://postgr.es/m/CAJ3gD9fR0wRNeAE8VqffNTyONS_UfFPRpqxhnD9Q42vZB+Jvpg@mail.gmail.com
https://postgr.es/m/e4f9d743-cd4b-efb0-7574-da21d86a7f36%40lab.ntt.co.jp
Backpatch: -

The variants of these functions that take numeric inputs (OIDs or
column numbers) are supposed to return NULL rather than failing
on bad input; this rule reduces problems with snapshot skew when
queries apply the functions to all rows of a catalog.
has_column_privilege() had careless handling of the case where the
table OID didn't exist. You might get something like this:
select has_column_privilege(9999,'nosuchcol','select');
ERROR: column "nosuchcol" of relation "(null)" does not exist
or you might get a crash, depending on the platform's printf's response
to a null string pointer.
In addition, while applying the column-number variant to a dropped
column returned NULL as desired, applying the column-name variant
did not:
select has_column_privilege('mytable','........pg.dropped.2........','select');
ERROR: column "........pg.dropped.2........" of relation "mytable" does not exist
It seems better to make this case return NULL as well.
Also, the OID-accepting variants of has_foreign_data_wrapper_privilege,
has_server_privilege, and has_tablespace_privilege didn't follow the
principle of returning NULL for nonexistent OIDs. Superusers got TRUE,
everybody else got an error.
Per investigation of Jaime Casanova's report of a new crash in HEAD.
These behaviors have been like this for a long time, so back-patch to
all supported branches.
Patch by me; thanks to Stephen Frost for discussion and review
Discussion: https://postgr.es/m/CAJGNTeP=-6Gyqq5TN9OvYEydi7Fv1oGyYj650LGTnW44oAzYCg@mail.gmail.com

VACUUM and ANALYZE share similar logic when it comes to opening a
relation to work on: how the relation is opened, in which order locks
are tried, and how logs should be generated when something does not
work as expected.
This commit refactors things so that both use the same code path to open
the relation, which makes the integration of new options easier.
Author: Michael Paquier
Reviewed-by: Nathan Bossart
Discussion: https://postgr.es/m/20180927075152.GT1659@paquier.xyz

This was claimed to have been done in
0a63f996e018ac508c858e87fa39cc254a5db49f, but that actually only
changed the documentation and not the grammar. (That commit did fully
change it for CREATE TRIGGER.)

Opening a relation with no lock at all is unsafe; there's no guarantee
that we'll see a consistent state of the relevant catalog entries.
While use of MVCC scans to read the catalogs partially addresses that
complaint, it's still possible to switch to a new catalog snapshot
partway through loading the relcache entry. Moreover, whether or not
you trust the reasoning behind sometimes using less than
AccessExclusiveLock for ALTER TABLE, that reasoning is certainly not
valid if concurrent users of the table don't hold a lock corresponding
to the operation they want to perform.
Hence, add some assertion-build-only checks that require any caller
of relation_open(x, NoLock) to hold at least AccessShareLock. This
isn't a full solution, since we can't verify that the lock level is
semantically appropriate for the action --- but it's definitely of
some use, because it's already caught two bugs.
We can also assert that callers of addRangeTableEntryForRelation()
hold at least the lock level specified for the new RTE.
Amit Langote and Tom Lane
Discussion: https://postgr.es/m/16565.1538327894@sss.pgh.pa.us
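
A toy standalone C sketch of an assertion-build-only check of the kind
described (all names, constants, and the lock bookkeeping here are
invented for the example, not the actual relation_open or lock-manager
code): asking for "no lock" is only acceptable when the caller already
holds at least the weakest lock.

    #include <assert.h>
    #include <stdbool.h>

    #define NO_LOCK            0
    #define ACCESS_SHARE_LOCK  1

    /* Toy stand-in for "do I already hold a lock on this relation?" */
    static int held_mode_for_rel = NO_LOCK;

    static bool
    lock_held_by_me(int minimum_mode)
    {
        return held_mode_for_rel >= minimum_mode;
    }

    static void
    open_relation(int lockmode)
    {
        /* Assertion-build-only rule: NO_LOCK requires a pre-existing lock. */
        if (lockmode == NO_LOCK)
            assert(lock_held_by_me(ACCESS_SHARE_LOCK));
        else
            held_mode_for_rel = lockmode;   /* acquire and remember */
        /* ... open the relation ... */
    }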

If the column being modified is referenced by a foreign key constraint
of another table, ALTER TABLE would open the other table (to re-parse
the constraint's definition) without having first obtained a lock on it.
This was evidently intentional, but that doesn't mean it's really safe.
It's especially not safe in 9.3, which pre-dates use of MVCC scans for
catalog reads, but even in current releases it doesn't seem like a good
idea.
We know we'll need AccessExclusiveLock shortly to drop the obsoleted
constraint, so just get that a little sooner to close the hole.
Per testing with a patch that complains if we open a relation without
holding any lock on it. I don't plan to back-patch that patch, but we
should close the holes it identifies in all supported branches.
Discussion: https://postgr.es/m/2038.1538335244@sss.pgh.pa.us

Add RangeTblEntry.rellockmode, which records the appropriate lock mode for
each RTE_RELATION rangetable entry (either AccessShareLock, RowShareLock,
or RowExclusiveLock depending on the RTE's role in the query).
This patch creates the field and makes all creators of RTE nodes fill it
in reasonably, but for the moment nothing much is done with it. The plan
is to replace assorted post-parser logic that re-determines the right
lockmode to use with simple uses of rte->rellockmode. For now, just add
Asserts in each of those places that the rellockmode matches what they are
computing today. (In some cases the match isn't perfect, so the Asserts
are weaker than you might expect; but this seems OK, as per discussion.)
This passes check-world for me, but it seems worth pushing in this state
to see if the buildfarm finds any problems in cases I failed to test.
catversion bump due to change of stored rules.
Amit Langote, reviewed by David Rowley and Jesper Pedersen,
and whacked around a bit more by me
Discussion: https://postgr.es/m/468c85d9-540e-66a2-1dde-fec2b741e688@lab.ntt.co.jp

The connection authorized message has quite a bit of useful information
in it, but didn't include the application_name (when provided), so let's
add that as it can be very useful.
Note that at the point where we're emitting the connection authorized
message, we haven't processed GUCs, so it's not possible to get this by
using log_line_prefix (which pulls from the GUC). There's also
something to be said for having this included in the connection
authorized message and then not needing to repeat it for every line, as
having it in log_line_prefix would do.
The GUC cleans the application name to pure ASCII, so do that here too;
pull out the logic for cleaning up a string into its own function in
common code and re-use it from those places, as well as from
check_cluster_name, which was doing the same thing.
Author: Don Seiler <don@seiler.us>
Discussion: https://postgr.es/m/CAHJZqBB_Pxv8HRfoh%2BAB4KxSQQuPVvtYCzMg7woNR3r7dfmopw%40mail.gmail.com
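
A minimal standalone C sketch of the kind of string-cleaning pass
described above (the function name and the replacement character are
assumptions for the example, not the shared helper's actual contract):
non-printable or non-ASCII bytes are replaced in place so the value is
safe to emit in a log line.

    /* Replace anything outside printable ASCII with '?', in place. */
    static void
    clean_to_ascii(char *str)
    {
        for (char *p = str; *p != '\0'; p++)
        {
            unsigned char c = (unsigned char) *p;

            if (c < 32 || c > 126)
                *p = '?';
        }
    }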

Give a specific error complaining about lack of posix_fadvise() when
someone tries to set effective_io_concurrency > 0 on platforms
without that.
This probably isn't worth extensive back-patching, but I (tgl) felt
cramming it into v11 was reasonable.
James Robinson
Discussion: https://postgr.es/m/153771876450.14994.560017943128223619@wrigleys.postgresql.org
Discussion: https://postgr.es/m/A3942987-5BC7-4F05-B54D-2A0EC2914B33@jlr-photo.com
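
A standalone C sketch of the platform check involved (testing for
POSIX_FADV_WILLNEED is an assumption about how availability can be
detected; the function name and message are invented, and the real GUC
check hook differs): a setting that implies prefetching is rejected with
a specific message on platforms lacking posix_fadvise() advice.

    #include <fcntl.h>
    #include <stdio.h>

    /* Return 1 if the requested prefetch depth is acceptable here. */
    static int
    check_prefetch_setting(int requested_depth)
    {
    #ifdef POSIX_FADV_WILLNEED
        (void) requested_depth;
        return 1;                   /* prefetch advice is available */
    #else
        if (requested_depth > 0)
        {
            fprintf(stderr,
                    "prefetching is not supported on this platform "
                    "(posix_fadvise is missing)\n");
            return 0;
        }
        return 1;
    #endif
    }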

When the checkpointer receives a SIGHUP signal to update its configuration,
it may need to update the shared memory for full_page_writes and need to
write a WAL record for it. Now, it is quite possible that the XLOG
machinery has not been initialized by that time, which will lead to an
assertion failure while doing so. The fix is to allow the initialization
of the XLOG machinery outside a critical section.
This bug was introduced by commit 2c03216d83, which added the XLOG
machinery initialization to the RecoveryInProgress code path.
Reported-by: Dilip Kumar
Author: Dilip Kumar
Reviewed-by: Michael Paquier and Amit Kapila
Backpatch-through: 9.5
Discussion: https://postgr.es/m/CAFiTN-u4BA8KXcQUWDPNgaKAjDXC=C2whnzBM8TAcv=stckYUw@mail.gmail.com

A restart point or a checkpoint recycling WAL segments treats segments
marked with neither ".done" (archiving is done) nor ".ready" (segment is
ready to be archived) in archive_status the same way for archive_mode
being "on" or "always". While for a primary this is fine, a standby
running a restart point with archive_mode = on would try to mark such a
segment as ready for archiving, which is something that will never
happen except after the standby is promoted.
Note that this problem applies only to WAL segments coming from the
local pg_wal the first time archive recovery is run. Segments part of a
self-contained base backup are the most common case where this could
happen; however, even in this case the .done markers would most likely
be part of the backup. Segments recovered from an archive are
marked as .ready or .done by the startup process, and segments finished
streaming are marked as such by the WAL receiver, so they are handled
already.
Reported-by: Haruka Takatsuka
Author: Michael Paquier
Discussion: https://postgr.es/m/15402-a453c90ed4cf88b2@postgresql.org
Backpatch-through: 9.5, where archive_mode = always has been added.

It failed if passed a nonexistent relation OID, or one that was a non-heap
relation, because of blindly applying heap_open to a user-supplied OID.
This is not OK behavior for a SQL-exposed function; we have a project
policy that we should return NULL in such cases. Moreover, since
pg_get_partition_constraintdef ought now to work on indexes, restricting
it to heaps is flat wrong anyway.
The underlying function generate_partition_qual() wasn't on board with
indexes having partition quals either, nor for that matter with rels
having relispartition set but yet null relpartbound. (One wonders
whether the person who wrote the function comment blocks claiming that
these functions allow a missing relpartbound had ever tested it.)
Fix by testing relispartition before opening the rel, and by using
relation_open not heap_open. (If any other relkinds ever grow the
ability to have relispartition set, the code will work with them
automatically.) Also, don't reject null relpartbound in
generate_partition_qual.
Back-patch to v11, and all but the null-relpartbound change to v10.
(It's not really necessary to change generate_partition_qual at all
in v10, but I thought s/heap_open/relation_open/ would be a good
idea anyway just to keep the code in sync with later branches.)
Per report from Justin Pryzby.
Discussion: https://postgr.es/m/20180927200020.GJ776@telsasoft.com

Author: Mark Dilger
Discussion: https://postgr.es/m/B2AEFCD0-836D-4654-9D59-3DF616E0A6F3%40gmail.com

The previous commit wasn't careful enough to remove all traces of
TupleDescGetSlot().
Besides fixing the oversight of not removing TupleDescGetSlot()'s
declaration, this also removes FuncCallContext->slot. That was
documented to be for use in combination with TupleDescGetSlot(), a
cursory search over extensions finds no users, and there don't seem
to be convincing reasons to keep it around. If we later in the v12
release cycle find users, we can re-consider this part of the commit.
Reported-By: Michael Paquier
Discussion: https://postgr.es/m/20180926000413.GC1659@paquier.xyz

Those previously used bool, which should be safe on any modern
platform; however, per the C standard it is better to use
sig_atomic_t for variables manipulated in signal handlers. This commit
adds at the same time PGDLLIMPORT to ClientConnectionLost.
Author: Michael Paquier
Reviewed-by: Tom Lane, Chris Travers, Andres Freund
Discussion: https://postgr.es/m/20180925011311.GD1354@paquier.xyz
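
A minimal standalone C sketch of the pattern involved (the variable and
handler names are invented for the example): a flag written from a
signal handler is declared volatile sig_atomic_t, and the handler does
nothing beyond setting it.

    #include <signal.h>

    static volatile sig_atomic_t got_sighup = 0;

    static void
    handle_sighup(int signo)
    {
        (void) signo;
        got_sighup = 1;     /* the only async-signal-safe work done here */
    }

    /* Elsewhere, in the main loop:
     *     if (got_sighup) { got_sighup = 0; reload_configuration(); }
     */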

When a table's ownership is changed, we must apply that also to any owned
sequences. (Otherwise, it would result in a situation that cannot be
restored, because linked sequences must have the same owner as the
table.) But this was previously only applied to regular tables and
materialized views, even though it should also apply to at least foreign
tables. This patch removes the relkind check altogether, because it
doesn't save very much and just introduces the possibility of similar
omissions.
Bug: #15238
Reported-by: Christoph Berg <christoph.berg@credativ.de>

I started out with the idea that we needed to detect use of %m format specs
in contexts other than elog/ereport calls, because we couldn't rely on that
working in *printf calls. But a better answer is to fix things so that it
does work. Now that we're using snprintf.c all the time, we can implement
%m in it, and the problem is fixed.
This requires also adjusting our various printf-wrapping functions so that
they ensure "errno" is preserved when they call snprintf.c.
Remove elog.c's handmade implementation of %m, and let it rely on
snprintf to support the feature. That should provide some performance
gain, though I've not attempted to measure it.
There are a lot of places where we could now simplify 'printf("%s",
strerror(errno))' into 'printf("%m")', but I'm not in any big hurry
to make that happen.
Patch by me, reviewed by Michael Paquier
Discussion: https://postgr.es/m/2975.1526862605@sss.pgh.pa.us
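
A toy standalone C sketch of the errno-preservation point (not the code
in src/port/snprintf.c; the wrapper name is invented): a printf-style
wrapper captures errno on entry, restores it just before invoking the
formatter so a %m conversion (where supported) still sees the caller's
error, and leaves errno unchanged on return.

    #include <errno.h>
    #include <stdarg.h>
    #include <stdio.h>
    #include <stdlib.h>

    static int
    wrapped_printf(const char *fmt, ...)
    {
        int      save_errno = errno;
        size_t   len = 1024;
        char    *buf = malloc(len);         /* may change errno */
        va_list  args;
        int      rc;

        if (buf == NULL)
            return -1;

        errno = save_errno;                 /* restore before formatting */
        va_start(args, fmt);
        rc = vsnprintf(buf, len, fmt, args);    /* %m, where supported, now
                                                 * sees the caller's errno */
        va_end(args);

        if (rc >= 0)
            fputs(buf, stdout);
        free(buf);
        errno = save_errno;                 /* leave errno as the caller saw it */
        return rc;
    }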

elog.c has long had a private strerror wrapper that handles assorted
possible failures or deficiencies of the platform's strerror. On Windows,
it also knows how to translate Winsock error codes, which the native
strerror does not. Move all this code into src/port/strerror.c and
define strerror() as a macro that invokes it, so that both our frontend
and backend code will have all of this behavior.
I believe this constitutes an actual bug fix on Windows, since AFAICS
our frontend code did not report Winsock error codes properly before this.
However, the main point is to lay the groundwork for implementing %m
in src/port/snprintf.c: the behavior we want %m to have is this one,
not the native strerror's.
Note that this throws away the prior use of src/port/strerror.c,
which was to implement strerror() on platforms lacking it. That's
been dead code for nigh twenty years now, since strerror() was
already required by C89.
We should likewise cause strerror_r to use this behavior, but
I'll tackle that separately.
Patch by me, reviewed by Michael Paquier
Discussion: https://postgr.es/m/2975.1526862605@sss.pgh.pa.us
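
A minimal standalone C sketch of the kind of wrapping described (the
real code lives in src/port/strerror.c and also translates Winsock
codes; the wrapper name and fallback text here are invented): callers
always get usable text, even for error codes the platform's strerror()
does not recognize.

    #include <stdio.h>
    #include <string.h>

    static char *
    wrapped_strerror(int errnum)
    {
        static char fallback[48];
        char       *str = strerror(errnum);

        if (str == NULL || *str == '\0')
        {
            snprintf(fallback, sizeof(fallback),
                     "unrecognized error %d", errnum);
            str = fallback;
        }
        return str;
    }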

While we are probably still far away from fully implementing
assertions, all patch proposals appear to take issue with the existing
dummy grammar CREATE/DROP ASSERTION productions, so update those a
little bit. Rename the rule, use any_name instead of name, and remove
some unused code. Also remove the production for DROP ASSERTION,
since that would most likely be handled via the generic DROP support.
extracted from a patch by Joe Wildish

According to the source history, the internal format of the line data type
has changed, but various functions working with it were not updated and
thus were producing wrong results.
This patch addresses various such issues, in particular:
* Reject invalid specification A=B=0 on receive
* Reject same points on line_construct_pp()
* Fix perpendicular operator when negative values are involved
* Avoid division by zero on perpendicular operator
* Fix intersection and distance operators when neither A nor B are 1
* Return NULL for closest point when objects are parallel
* Check whether closest point of line segments is the intersection point
* Fix closest point of line segments being on the wrong segment
Aside from handling those issues, the patch also aims to make operators
more symmetric and less sensitive to precision loss. The EPSILON interferes
with even minor changes, but the least we can do is apply it to both
sides of the operators equally.
Author: Emre Hasegeli
Reviewed-by: Tomas Vondra
Discussion: https://www.postgresql.org/message-id/CAE2gYzxF7-5djV6-cEvqQu-fNsnt%3DEqbOURx7ZDg%2BVv6ZMTWbg%40mail.gmail.com
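
A minimal standalone C sketch of the "apply the tolerance to both sides"
point (the EPSILON value and helper names are generic stand-ins, not the
geometry code itself): a fuzzy comparison used symmetrically gives the
same answer regardless of operand order.

    #include <math.h>

    #define EPSILON 1.0E-06     /* illustrative tolerance */

    /* Symmetric fuzzy equality: fp_eq(a, b) == fp_eq(b, a) by construction. */
    static int
    fp_eq(double a, double b)
    {
        return fabs(a - b) <= EPSILON;
    }

    /* Symmetric fuzzy "less than": strictly less, beyond the tolerance. */
    static int
    fp_lt(double a, double b)
    {
        return (b - a) > EPSILON;
    }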

The activation and deactivation of commit timestamp tracking have not
been handled consistently for a primary or standbys at recovery. The
facility can be activated at three different moments of recovery:
- The beginning, where a primary would use the GUC value for the
decision-making, and where a standby relies on the contents of the
control file.
- When replaying a XLOG_PARAMETER_CHANGE record at redo.
- The end, where both primary and standby rely on the GUC value.
Using the GUC value for a primary at the beginning of recovery causes
problems with commit timestamp access when doing crash recovery.
Particularly, when replaying transaction commits, it could be possible
that an attempt to read commit timestamps is done for a transaction
which committed at a moment when track_commit_timestamp was disabled.
A test case is added to reproduce the failure. The test works down to
v11 as it takes advantage of transaction commits within procedures.
Reported-by: Hailong Li
Author: Masahiko Sawada, Michael Paquier
Reviewed-by: Kyotaro Horiguchi
Discussion: https://postgr.es/m/11224478-a782-203b-1f17-e4797b39bdf0@qunar.com
Backpatch-through: 9.5, where commit timestamps have been introduced.

TupleDescGetSlot() was kept around for backward compatibility for
user-written SRFs. With the TupleTableSlot abstraction work, that code
will need to be version specific anyway, so there's no point in
keeping the function around any longer.
Author: Ashutosh Bapat
Reviewed-By: Andres Freund
Discussion: https://postgr.es/m/20180220224318.gw4oe5jadhpmcdnm@alap3.anarazel.de

Upcoming changes introduce further types of tuple table slots, in
preparation for making table storage pluggable. New storage methods
will have different representations of tuples; therefore the slot
accessor should refer explicitly to heap tuples.
Instead of just renaming the functions, split it into one function
that accepts heap tuples not residing in buffers, and one accepting
ones in buffers. Previously one function was used for both, but that
was a bit awkward already, and splitting will allow us to represent
slot types for tuples in buffers and normal memory separately.
This is split out from the patch introducing abstract slots, as this
largely consists of mechanical changes.
Author: Ashutosh Bapat
Reviewed-By: Andres Freund
Discussion: https://postgr.es/m/20180220224318.gw4oe5jadhpmcdnm@alap3.anarazel.de

That section is never in sync with the actual routines available and
their functionality.
Author: Ashutosh Bapat
Discussion: https://postgr.es/m/20180220224318.gw4oe5jadhpmcdnm@alap3.anarazel.de

Previously it was an int / 4 bytes. The maximum number of attributes
in a tuple is restricted by the maximum value Var->varattno, which is
an AttrNumber/int16. Hence use the same data type for
TupleTableSlot->tts_nvalid.
Author: Ashutosh Bapat
Discussion: https://postgr.es/m/20180220224318.gw4oe5jadhpmcdnm@alap3.anarazel.de

The documented shortcoming was actually fixed in 4c728f3829,
so the comment is no longer true.

Previously, when using parallel query, EXPLAIN (ANALYZE)'s JIT
compilation timings did not include the overhead from doing so on the
workers. Fix that.
We do so by simply aggregating the cost of doing JIT compilation on
workers and the leader together. Arguably that's not quite accurate,
because the total time spent doing so is spent in parallel - but it's
hard to do much better. For additional detail, when VERBOSE is
specified, the stats for workers are displayed separately.
Author: Amit Khandekar and Andres Freund
Discussion: https://postgr.es/m/CAJ3gD9eLrz51RK_gTkod+71iDcjpB_N8eC6vU2AW-VicsAERpQ@mail.gmail.com
Backpatch: 11-

Author: Mark G
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/CAEeOP_Zy_FvVwcAU0UX9nkOhnoR5KN%3D0B6LWX_kv0ZuSc4wbGw%40mail.gmail.com