| Commit message | Author | Age |
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This is useful infrastructure for an upcoming proposed patch to
allow the WAL segment size to be changed at initdb time; tools like
pg_basebackup need the ability to interrogate the server setting.
But it also doesn't seem like a bad thing to have independently of
that; it may find other uses in the future.
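As an illustrative sketch (assuming the facility is a SHOW-style command
in the replication grammar, and using wal_segment_size purely as an
example setting), a client could issue:

    -- over a walsender connection, e.g.
    -- psql "dbname=postgres replication=database"
    SHOW wal_segment_size;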
Robert Haas and Beena Emerson. (The original patch here was by
Beena, but I rewrote it to such a degree that most of the code
being committed here is mine.)
Discussion: http://postgr.es/m/CA+TgmobNo4qz06wHEmy9DszAre3dYx-WNhHSCbU9SAwf+9Ft6g@mail.gmail.com
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
If you create a DestReceiver of type DestRemote and try to use it from
a replication connection that is not bound to a specific database, or
any other hypothetical type of backend that is not bound to a specific
database, it will fail because it doesn't have a pg_proc catalog to
look up properties of the types being printed. In general, that's
an unavoidable problem, but we can hardwire the properties of a few
builtin types in order to support utility commands. This new
DestReceiver of type DestRemoteSimple does just that.
Patch by me, reviewed by Michael Paquier.
Discussion: http://postgr.es/m/CA+TgmobNo4qz06wHEmy9DszAre3dYx-WNhHSCbU9SAwf+9Ft6g@mail.gmail.com
|
|
|
|
|
|
|
| |
This patch doesn't actually make any index AM parallel-aware, but it
provides the necessary functions at the AM layer to do so.
Rahila Syed, Amit Kapila, Robert Haas
|
|
|
|
|
|
|
|
| |
Previously, ExecInitModifyTable was missing handling for WITH CHECK
OPTION, and view_query_is_auto_updatable was missing handling for
RELKIND_PARTITIONED_TABLE.
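A minimal SQL sketch of the now-supported case (all names illustrative):

    CREATE TABLE measurement (city_id int, logdate date)
        PARTITION BY RANGE (logdate);
    CREATE TABLE measurement_y2017 PARTITION OF measurement
        FOR VALUES FROM ('2017-01-01') TO ('2018-01-01');
    -- An auto-updatable view over the partitioned table; the CHECK
    -- OPTION is enforced for writes through the view:
    CREATE VIEW recent_data AS
        SELECT * FROM measurement WHERE logdate >= DATE '2017-06-01'
        WITH CHECK OPTION;
    INSERT INTO recent_data VALUES (1, '2017-01-15');  -- rejected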
Amit Langote, reviewed by me.
|
|
|
|
|
|
|
|
|
|
|
|
| |
In 2ac3ef7a01df859c62d0a02333b646d65eaec5ff, we changed things so that
it's possible for a different TupleTableSlot to be used for partitioned
tables at successively lower levels. If we do end up changing the slot
from the original, we must update ecxt_scantuple to point to the new one
for the partition key of the tuple to be computed correctly.
Reported by Rajkumar Raghuwanshi. Patch by Amit Langote.
Discussion: http://postgr.es/m/CAKcux6%3Dm1qyqB2k6cjniuMMrYXb75O-MB4qGQMu8zg-iGGLjDw%40mail.gmail.com
|
|
|
|
|
|
| |
We've accumulated quite a bit of stuff with which pgindent is not
quite happy in this code; clean it up to provide a less-annoying base
for future pgindent runs.
|
|
|
|
|
| |
This was intended to be included in the previous commit,
but I goofed.
|
|
|
|
|
|
|
|
| |
The code here previously tried to call the partitioning operator, but
really the right thing to do (and the safe thing to do) is to use
datumIsEqual().
Amit Langote, but I expanded the comment and fixed a compiler warning.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
When copying into a partitioned table, the target heap may change from
one tuple to the next. We must ask ReadBufferBI() to get a new buffer
every time such a change occurs. To do that, use the new function
ReleaseBulkInsertStatePin(). This fixes the bug that tuples ended up
being inserted into the wrong partition, which occurred exactly
because the wrong buffer was used.
Amit Langote, per a suggestion from Robert Haas. Some cosmetic
adjustments by me.
Reports by 高增琦 (Gao Zengqi), Venkata B Nagothi, and
Ragnar Ouchterlony.
Discussion: http://postgr.es/m/CAFmBtr32FDOqofo8yG-4mjzL1HnYHxXK5S9OGFJ%3D%3DcJpgEW4vA%40mail.gmail.com
Discussion: http://postgr.es/m/CAEyp7J9WiX0L3DoiNcRrY-9iyw%3DqP%2Bj%3DDLsAnNFF1xT2J1ggfQ%40mail.gmail.com
Discussion: http://postgr.es/m/16d73804-c9cd-14c5-463e-5caad563ff77%40agama.tv
Discussion: http://postgr.es/m/CA+TgmoaiZpDVUUN8LZ4jv1qFE_QyR+H9ec+79f5vNczYarg5Zg@mail.gmail.com
|
|
|
|
| |
Petr Jelinek
|
|
|
|
|
|
|
|
| |
The interface for the function was changed in
d72731a70450b5e7084991b9caa15cb58a2820df, but the function's comments
were not updated.
Patch by Yugo Nagata.
|
|
|
|
|
|
|
| |
The make rules needed further refinement so that we don't run multiple
generations per build.
Reported by Tom Lane
|
|
|
|
|
|
|
|
|
|
|
| |
For some reason that is lost in history, a descending sequence would
default its minimum value to -2^63+1 (-PG_INT64_MAX) instead of
-2^63 (PG_INT64_MIN), even though explicitly specifying a minimum value
of -2^63 would work. Fix this inconsistency by using the full range by
default.
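For example (a sketch of the new default):

    CREATE SEQUENCE countdown INCREMENT BY -1;
    -- The default minimum value is now -9223372036854775808
    -- (PG_INT64_MIN) rather than -9223372036854775807, so the full
    -- range is reachable:
    SELECT setval('countdown', -9223372036854775808);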
Reported-by: Daniel Verite <daniel@manitou-mail.org>
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
|
|
|
|
|
|
| |
initdb used to warn about that, but it was changed to an error in
pg_import_system_collations, and some buildfarm members failed because
of that. Change it back to a warning.
|
|
|
|
| |
From: Petr Jelinek <pjmodos@pjmodos.net>
|
|
|
|
|
|
| |
get_object_address_publication_rel() needed to check *relation, not
relation. Rename the variables to match the style used nearby and
avoid confusion.
|
|
|
|
|
|
| |
The results of the libpq functions PQescapeIdentifier() and
PQescapeLiteral() must be freed explicitly. Also handle errors in these
functions better.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Vacuum truncation scan can be sped up on rotating media by prefetching
blocks in forward direction. That makes the blocks already present in
memory by the time they are needed, while also letting the OS
read-ahead kick in.
The truncation scan has been measured to be five times faster than
without this patch (that was on a slow disk, but it shouldn't hurt on
fast disks).
Author: Álvaro Herrera, loosely based on a submission by Claudio Freire
Discussion: https://postgr.es/m/CAGTBQpa6NFGO_6g_y_7zQx8L9GcHDSQKYdo1tGuh791z6PYgEg@mail.gmail.com
|
|
|
|
|
|
|
|
|
|
|
| |
Joining three tables only takes two join nodes. I think when I (tgl)
wrote this, I was envisioning possible additional joins; but since the
example doesn't show any fourth table, it's just confusing to write
a third join node.
Etsuro Fujita
Discussion: https://postgr.es/m/e6cfbaa3-af02-1abc-c25e-8fa5c6bc4e21@lab.ntt.co.jp
|
|
|
|
|
|
|
|
|
|
|
|
| |
Project style is to put things in this order, for the good and sufficient
reason that you often need the typedefs in the function declarations.
There already was one function declaration that needed a typedef, which
was randomly placed away from all the other static function declarations
in consequence. And the submitted patch for better json_populate_record
functionality jumped through even more hoops in order to preserve this
bad idea.
This patch only moves lines from point A to point B, no other changes.
|
|
|
|
|
|
|
|
|
| |
Coverity complained quite properly that commit ea15e1867 had introduced
unreachable code into ExecGather(); to wit, it was no longer possible to
iterate the final for-loop more or less than once. So remove the for().
In passing, clean up a couple of comments, and make better use of a local
variable.
|
| |
|
|
|
|
| |
This avoids builtins.h having to include additional header files.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
When (1) autovacuum = off and (2) there's at least one database with
an XID age greater than autovacuum_freeze_max_age and (3) all tables
in that database that need vacuuming are already being processed by a
worker and (4) the autovacuum launcher is started, a kind of infinite
loop occurs. The launcher starts a worker and immediately exits. The
worker, finding no work to do, immediately starts the launcher,
supposedly so that the next database can be processed. But because
datfrozenxid for that database hasn't been advanced yet, the new
worker gets put right back into the same database as the old one,
where it once again starts the launcher and exits. High-speed ping
pong ensues.
There are several possible ways to break the cycle; this seems like
the safest one.
Amit Khandekar (code) and Robert Haas (comments), reviewed by
Álvaro Herrera.
Discussion: http://postgr.es/m/CAJ3gD9eWejf72HKquKSzax0r+epS=nAbQKNnykkMA0E8c+rMDg@mail.gmail.com
|
|
|
|
|
|
|
|
| |
If either bound is infinite, then we shouldn't even try to perform a
comparison of the values themselves. Rearrange the logic so that
we don't.
Per buildfarm member skink and Tom Lane.
|
|
|
|
|
|
|
|
|
| |
This was forgotten in 665d1fad99e7b11678b0d5fa24d2898424243cd6 and
caused the whole buildfarm to become red for a little while.
Author: Petr Jelínek
Also fix a typo in a nearby error message.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
pgoutput evidently needs to be built without -DBUILDING_DLL. (It seems
like a pretty bad idea that these makefiles need to know exactly where
all the shlibs are in the tree, or maybe what's bad is putting them under
src/backend/. But right now is not the time to redesign that.)
Also, remove "override CPPFLAGS" in pgoutput's Makefile. I don't think
that that actually has any bad consequences, but it's certainly useless
in a directory that has no .h files, and it might be contributing to the
failure somehow.
Per buildfarm.
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
- Add PUBLICATION catalogs and DDL
- Add SUBSCRIPTION catalog and DDL
- Define logical replication protocol and output plugin
- Add logical replication workers
From: Petr Jelinek <petr@2ndquadrant.com>
Reviewed-by: Steve Singer <steve@ssinger.info>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Erik Rijkers <er@xs4all.nl>
Reviewed-by: Peter Eisentraut <peter.eisentraut@2ndquadrant.com>
|
|
|
|
|
|
|
|
|
|
|
|
| |
Brown-paper-bag bug in commit ab1f0c822: the old code here coped with
null CachedPlanSource.raw_parse_tree, the new code not so much.
Per report from Dave Cramer.
No regression test, because our core testing infrastructure doesn't
provide any easy way to exercise this path. Fortunately, the JDBC
crew test it regularly.
Discussion: https://postgr.es/m/CADK3HH+Ug3xCysKqw_dZOnaNnytZ1Rh5yP05hjO-e4NoyRxVvA@mail.gmail.com
|
|
|
|
|
|
|
|
|
|
|
|
| |
I'd somehow talked myself into believing that set_append_rel_size
doesn't need to worry about getting back an AND clause when it applies
eval_const_expressions to the result of adjust_appendrel_attrs (that is,
transposing the appendrel parent's restriction clauses for one child).
But that is nonsense, and Andreas Seltenreich's fuzz tester soon
turned up a counterexample. Put back the make_ands_implicit step
that was there before, and add a regression test covering the case.
Report: https://postgr.es/m/878tq6vja6.fsf@ansel.ydns.eu
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Since 69f4b9c plain expression evaluation (and thus normal projection)
can't return sets of tuples anymore. Thus remove code dealing with
that possibility.
This will require adjustments in external code using
ExecEvalExpr()/ExecProject() - that should neither be hard nor very
common.
Author: Andres Freund and Tom Lane
Discussion: https://postgr.es/m/20160822214023.aaxz5l4igypowyri@alap3.anarazel.de
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
If a user requests the commit timestamp for a transaction old enough
that its data is concurrently being truncated away by vacuum at just the
right time, they would receive an ugly internal file-not-found error
message from slru.c rather than the expected NULL return value.
In a primary server, the window for the race is very small: the lookup
has to occur exactly between the two calls by vacuum, and there's not a
lot that happens between them (mostly just a multixact truncate). In a
standby server, however, the window is larger because the truncation is
executed as soon as the WAL record for it is replayed, but the advance
of the oldest-Xid is not executed until the next checkpoint record.
To fix in the primary, simply reverse the order of operations in
vac_truncate_clog. To fix in the standby, augment the WAL truncation
record so that the standby is aware of the new oldest-XID value and can
apply the update immediately. WAL version bumped because of this.
No backpatch, because of the low importance of the bug and its rarity.
Author: Craig Ringer
Reviewed-By: Petr Jelínek, Peter Eisentraut
Discussion: https://postgr.es/m/CAMsr+YFhVtRQT1VAwC+WGbbxZZRzNou=N9Ed-FrCqkwQ8H8oJQ@mail.gmail.com
|
|
|
|
|
|
|
|
|
| |
Code to map attribute numbers in map_partition_varattnos() duplicates
what convert_tuples_by_name_map() does. Avoid that.
Amit Langote, per a report from Álvaro Herrera.
Discussion: http://postgr.es/m/9ce97382-54c8-deb3-9ee9-a2ec271d866b%40lab.ntt.co.jp
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Account for the fact that the highest bound less than or equal to the
upper bound might be either the lower or the upper bound of the
overlapping partition, depending on whether the proposed partition
completely contains the existing partition or merely overlaps it.
Also, we need not continue searching for an even greater bound in
partition_bound_bsearch() once we find the first bound that is *equal*
to the probe, because we don't have duplicate datums. That spends
cycles needlessly.
Amit Langote, per a report from Amul Sul. Cosmetic changes by me.
Discussion: http://postgr.es/m/CAAJ_b94XgbqVoXMyxxs63CaqWoMS1o2gpHiU0F7yGnJBnvDc_A%40mail.gmail.com
|
|
|
|
|
|
|
|
|
|
| |
In ExecInsert(), do not switch back to the root partitioned table
ResultRelInfo until after we finish ExecProcessReturning(), so that
RETURNING projection is done using the partition's descriptor. For
the projection to work correctly, it must be initialized for each
leaf partition during ModifyTableState initialization.
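A short sketch of the fixed behavior (names illustrative):

    CREATE TABLE pt (a int, b text) PARTITION BY LIST (a);
    CREATE TABLE pt1 PARTITION OF pt FOR VALUES IN (1);
    -- RETURNING now projects the result using the leaf partition's
    -- tuple descriptor rather than the root's:
    INSERT INTO pt VALUES (1, 'one') RETURNING *;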
Amit Langote
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
When a tuple is inserted into a partitioning root, no partition
constraints need to be enforced; when it is inserted into a leaf, the
parent's partitioning quals need to be enforced. The previous
coding got both of those cases right. When a tuple is inserted into
an intermediate level of the partitioning hierarchy (i.e. a table
which is both a partition itself and in turn partitioned), it must
enforce the partitioning qual inherited from its parent. That case
got overlooked; repair.
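A sketch of the intermediate-level case (names illustrative):

    CREATE TABLE root (a int, b int) PARTITION BY RANGE (a);
    CREATE TABLE mid PARTITION OF root
        FOR VALUES FROM (1) TO (10) PARTITION BY RANGE (b);
    CREATE TABLE leaf PARTITION OF mid FOR VALUES FROM (1) TO (10);
    -- mid is both a partition and partitioned; a direct insert into it
    -- must enforce the qual it inherits from root (a >= 1 AND a < 10):
    INSERT INTO mid VALUES (42, 5);  -- now correctly rejected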
Amit Langote
|
|
|
|
|
|
|
|
|
|
| |
There doesn't seem to be any reason not to allow negative years to be
interpreted as BC, so do that.
The documentation is pretty vague on the details of this function, so
nothing needs to change there.
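Bug #14446 concerned make_date(), so assuming that is the function in
question, a sketch of the new behavior:

    SELECT make_date(-44, 3, 15);  -- now returns 0044-03-15 BC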
Reported-by: Andy Abelisto, in bug #14446
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Evaluation of set-returning functions (SRFs) in the targetlist (like SELECT
generate_series(1,5)) so far was done in the expression evaluation (i.e.
ExecEvalExpr()) and projection (i.e. ExecProject/ExecTargetList) code.
This meant that most executor nodes performing projection, and most
expression evaluation functions, had to deal with the possibility that an
evaluated expression could return a set of return values.
That's bad because it leads to repeated code in a lot of places. It also,
and that's my (Andres's) motivation, made it a lot harder to implement a
more efficient way of doing expression evaluation.
To fix this, introduce a new executor node (ProjectSet) that can evaluate
targetlists containing one or more SRFs. To avoid the complexity of the old
way of handling nested expressions returning sets (e.g. having to pass up
ExprDoneCond, and dealing with arguments to functions returning sets etc.),
those SRFs can only be at the top level of the node's targetlist. The
planner makes sure (via split_pathtarget_at_srfs()) that SRF evaluation is
only necessary in ProjectSet nodes and that SRFs are only present at the
top level of the node's targetlist. If there are nested SRFs the planner
creates multiple stacked ProjectSet nodes. The ProjectSet nodes always get
input from an underlying node.
We also discussed and prototyped evaluating targetlist SRFs using ROWS
FROM(), but that turned out to be more complicated than we'd hoped.
While moving SRF evaluation to ProjectSet would have allowed retaining the
old "least common multiple" behavior when multiple SRFs are present in one
targetlist (i.e. continue returning rows until all SRFs are at the end of
their input at the same time), we decided to instead only return rows until
all SRFs are exhausted, returning NULL for already-exhausted ones. We
deemed the previous behavior too confusing, unexpected, and actually not
particularly useful.
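A sketch of the new behavior:

    SELECT generate_series(1, 3) AS a, generate_series(1, 2) AS b;
    -- Previously 6 rows (the least common multiple of 3 and 2);
    -- now 3 rows, with b yielding NULL once it is exhausted:
    --  a | b
    -- ---+---
    --  1 | 1
    --  2 | 2
    --  3 |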
As a side effect, the previously prohibited case of multiple set-returning
arguments to a function is now allowed. Not because it's particularly
desirable, but because it ends up working and there seems to be no argument
for adding code to prohibit it.
The behavior of COALESCE and CASE containing SRFs has changed: they now
return multiple rows from the expression, even when the SRF-containing
"arm" of the expression is not evaluated. That's because the SRFs are
evaluated in a separate ProjectSet node. As that's quite confusing, we're
likely to instead prohibit SRFs in those places. But that's still being
discussed, and the code would reside in places not touched here, so that's
a task for later.
There's a lot of now-superfluous code dealing with set-returning expressions
around. But as the changes to get rid of it are verbose and largely boring,
it seems better for readability to keep the cleanup as a separate commit.
Author: Tom Lane and Andres Freund
Discussion: https://postgr.es/m/20160822214023.aaxz5l4igypowyri@alap3.anarazel.de
|
|
|
|
| |
Typo fix from Mithun Cy; other improvements by me.
|
|
|
|
|
|
|
|
|
| |
This makes it possible to delete multiple keys from a jsonb value by
passing in an array of text values, which makes the operation much
faster than individually deleting the keys (which would require copying
the jsonb structure over and over again).
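For example:

    SELECT '{"a": 1, "b": 2, "c": 3}'::jsonb - '{a,b}'::text[];
    -- Result: {"c": 3}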
Reviewed by Dmitry Dolgov and Michael Paquier
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
These resulted in wrong answers if the relabeled argument could be matched
to an index column, as shown in bug #14504 from Evgeniy Kozlov. We might
be able to resurrect these optimizations by adjusting the planner's
treatment of RelabelType, or by adjusting btree's rules for selecting
comparison functions, but either solution will take careful analysis
and does not sound like a fit candidate for backpatching.
I left the catalog infrastructure in place and just reduced the transform
functions to always-return-NULL. This would be necessary anyway in the
back branches, and it doesn't seem important to be more invasive in HEAD.
Bug introduced by commit b8a18ad48. Back-patch to 9.5 where that came in.
Report: https://postgr.es/m/20170118144828.1432.52823@wrigleys.postgresql.org
Discussion: https://postgr.es/m/18771.1484759439@sss.pgh.pa.us
|
|
|
|
|
|
|
|
| |
The original table partitioning patch overlooked this.
Discussion: http://postgr.es/m/CAG1_KcDJiZB=L6yOUO_bVufj2q2851_xdkfhw0JdcD_2VtKssw@mail.gmail.com
Keith Fiske and Amit Langote, adjusted by me.
|
|
|
|
|
|
|
|
|
| |
This avoids additional translatable strings for each distinct type, as
well as making our quoting style around type names more consistent
(namely, that we don't quote type names). This continues what started
as f402b9950120.
Discussion: https://postgr.es/m/20160401170642.GA57509@alvherre.pgsql
|
|
|
|
|
|
|
| |
Forthcoming patches to allow other types of parallel scans will
need this logic, or something like it.
Dilip Kumar
|
|
|
|
|
|
|
|
|
|
|
|
| |
This resulted in failures depending on the order of "locale -a" output.
The original coding in initdb sorted the results, but that should be
unnecessary as long as "locale -a" doesn't print duplicate names. The
original entries will then all be non-dups, and while we might generate
duplicate aliases by stripping, they should be for different encodings and
thus not conflict. Even if the latter assumption fails somehow, it won't
be fatal because we're using if_not_exists mode for the aliases.
Discussion: https://postgr.es/m/26116.1484751196%40sss.pgh.pa.us
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
In an RLS query, we must ensure that security filter quals are evaluated
before ordinary query quals, in case the latter contain "leaky" functions
that could expose the contents of sensitive rows. The original
implementation of RLS planning ensured this by pushing the scan of a
secured table into a sub-query that it marked as a security-barrier view.
Unfortunately this results in very inefficient plans in many cases, because
the sub-query cannot be flattened and gets planned independently of the
rest of the query.
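A sketch of the hazard (leaky_fn is a hypothetical boolean function
that exposes its argument, e.g. via RAISE NOTICE):

    CREATE TABLE accounts (id int, owner text, balance numeric);
    ALTER TABLE accounts ENABLE ROW LEVEL SECURITY;
    CREATE POLICY own_rows ON accounts USING (owner = current_user);
    -- If leaky_fn(balance) were evaluated before the policy qual,
    -- it could observe rows the user must not see:
    SELECT * FROM accounts WHERE leaky_fn(balance);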
To fix, drop the use of sub-queries to enforce RLS qual order, and instead
mark each qual (RestrictInfo) with a security_level field establishing its
priority for evaluation. Quals must be evaluated in security_level order,
except that "leakproof" quals can be allowed to go ahead of quals of lower
security_level, if it's helpful to do so. This has to be enforced within
the ordering of any one list of quals to be evaluated at a table scan node,
and we also have to ensure that quals are not chosen for early evaluation
(i.e., use as an index qual or TID scan qual) if they're not allowed to go
ahead of other quals at the scan node.
This is sufficient to fix the problem for RLS quals, since we only support
RLS policies on simple tables and thus RLS quals will always exist at the
table scan level only. Eventually these qual ordering rules should be
enforced for join quals as well, which would permit improving planning for
explicit security-barrier views; but that's a task for another patch.
Note that FDWs would need to be aware of these rules --- and not, for
example, send an insecure qual for remote execution --- but since we do
not yet allow RLS policies on foreign tables, the case doesn't arise.
This will need to be addressed before we can allow such policies.
Patch by me, reviewed by Stephen Frost and Dean Rasheed.
Discussion: https://postgr.es/m/8185.1477432701@sss.pgh.pa.us
|
|
|
|
|
|
|
|
|
| |
Move this logic out of initdb into a user-callable function. This
simplifies the code and makes it possible to update the standard
collations later on if additional operating system collations appear.
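With this change, the import can be redone later; as a sketch, assuming
the two-argument form (if_not_exists boolean, schema regnamespace) added
by this patch series:

    SELECT pg_import_system_collations(true, 'pg_catalog');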
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Euler Taveira <euler@timbira.com.br>
|
|
|
|
|
|
|
|
| |
The bootstrap scanner/parser contains code to parse floating point
values, but this is not exercised anywhere, so remove it.
Reviewed-by: Jim Nasby
Discussion: https://postgr.es/m/20170110051119.b5h7i3z5qagy35rb@alvherre.pgsql
|
| |
|
| |
|