Commit 85c11324cabaddcfaf3347df78555b30d27c5b5a renamed pg_resetxlog
to pg_resetwal, but didn't make pg_upgrade smart enough to cope with
the situation.
Michael Paquier, per a complaint from Jeff Janes
|
The core of the functionality was already implemented when
pg_import_system_collations was added. This just exposes it as an
option in the SQL command.
|
This isn't exposed to the optimizer or the executor yet; we'll add
support for those things in a separate patch. But this puts the
basic mechanism in place: several processes can attach to a parallel
btree index scan, and each one will get a subset of the tuples that
would have been produced by a non-parallel scan. Each index page
becomes the responsibility of a single worker, which then returns
all of the TIDs on that page.
Rahila Syed, Amit Kapila, Robert Haas, reviewed and tested by
Anastasia Lubennikova, Tushar Ahuja, and Haribabu Kommi.
|
Higuchi Daisuke
|
This doesn't do anything to make Param nodes anything other than
parallel-restricted, so this only helps with uncorrelated subplans,
and it's not necessarily very cheap because each worker will run the
subplan separately (just as a Hash Join will build a separate copy of
the hash table in each participating process), but it's a first step
toward supporting cases that are more likely to help in practice, and
is occasionally useful on its own.
Amit Kapila, reviewed and tested by Rafia Sabih, Dilip Kumar, and
me.
Discussion: http://postgr.es/m/CAA4eK1+e8Z45D2n+rnDMDYsVEb5iW7jqaCH_tvPMYau=1Rru9w@mail.gmail.com
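As a rough illustration (table names invented; cost settings forced
for the example), this enables plans where an uncorrelated hashed
subplan runs inside each parallel worker:

    SET parallel_setup_cost = 0;
    SET parallel_tuple_cost = 0;
    EXPLAIN
    SELECT * FROM big_table
    WHERE val NOT IN (SELECT val FROM small_table);
    -- each worker under the Gather builds its own copy of the
    -- hashed subplan's table, as described above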
|
The xlog-specific headers need to be included in both frontend code -
specifically, pg_waldump - and the backend, but the remainder of the
private headers for each index are only needed by the backend. By
splitting the xlog stuff out into separate headers, pg_waldump pulls
in fewer backend headers, which is a good thing.
Patch by me, reviewed by Michael Paquier and Andres Freund, per a
complaint from Dilip Kumar.
Discussion: http://postgr.es/m/CA+TgmoZ=F=GkxV0YEv-A8tb+AEGy_Qa7GSiJ8deBKFATnzfEug@mail.gmail.com
|
Michael Paquier, reviewed and slightly revised by me.
Discussion: http://postgr.es/m/CAB7nPqRzCQb=vdfHvMtP0HMLBHU6z1aGdo4GJsUP-HP8jx+Pkw@mail.gmail.com
|
Range partitioning doesn't support nulls in the partitioning columns,
but list partitioning does.
Amit Langote, per a complaint from Amul Sul
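A minimal sketch of the difference (names invented):

    -- list partitioning can route NULLs to a partition:
    CREATE TABLE lp (a int) PARTITION BY LIST (a);
    CREATE TABLE lp_null PARTITION OF lp FOR VALUES IN (NULL);
    INSERT INTO lp VALUES (NULL);    -- stored in lp_null

    -- range partitioning has no partition that accepts a NULL key:
    CREATE TABLE rp (a int) PARTITION BY RANGE (a);
    CREATE TABLE rp1 PARTITION OF rp FOR VALUES FROM (0) TO (100);
    INSERT INTO rp VALUES (NULL);    -- fails: no suitable partition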
|
I noticed while hacking on join UNION transforms that planner.c's
function get_base_rel_indexes() just duplicates the functionality of
get_relids_in_jointree(). It doesn't even have the excuse of being
older code :-(. Drop it and use the latter function instead.
|
Commit 62e8b38 renamed pg_basebackup's "--xlog-method" option to
"--wal-method", but forgot to update the error message that mentions
the option.
|
This module was intended to ease migrations of applications that used
the pre-8.3 version of text search to the in-core version introduced
in that release. However, since all pre-8.3 releases of the database
have been out of support for more than 5 years, we expect that few
people are still depending on it. If some
people still need it, nothing prevents it from being maintained as a
separate extension, outside of core.
Discussion: http://postgr.es/m/CA+Tgmob5R8aDHiFRTQsSJbT1oreKg2FOSBrC=2f4tqEH3dOMAg@mail.gmail.com
|
The ALTER TABLE ALTER TYPE implementation can issue DROP INDEX and
CREATE INDEX to refit existing indexes for the new column type. Since
this CREATE INDEX is an implementation detail of an index alteration,
the ensuing DefineIndex() should skip ACL checks specific to index
creation. It already skips the namespace ACL check. Make it skip the
tablespace ACL check, too. Back-patch to 9.2 (all supported versions).
Reviewed by Tom Lane.
|
This stores a data type, required to be an integer type, with the
sequence. The sequence's min and max values default to the range
supported by the type, and they cannot be set to values exceeding that
range. The internal implementation of the sequence is not affected.
Change the serial types to create sequences of the appropriate type.
This makes sure that the min and max values of the sequence for a serial
column match the range of values supported by the table column. So the
sequence can no longer overflow the table column.
This also makes monitoring for sequence exhaustion/wraparound easier,
which currently requires various contortions to cross-reference the
sequences with the table columns they are used with.
This commit also effectively reverts the pg_sequence column reordering
in f3b421da5f4addc95812b9db05a24972b8fd9739, because the new seqtypid
column allows us to fill the hole in the struct and create a more
natural overall column ordering.
Reviewed-by: Steve Singer <steve@ssinger.info>
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
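A minimal sketch of the resulting behavior (sequence and table names
invented):

    CREATE SEQUENCE s AS smallint;   -- min/max default to 1 and 32767
    SELECT seqtypid::regtype, seqmin, seqmax
      FROM pg_sequence WHERE seqrelid = 's'::regclass;
    --  smallint |      1 | 32767
    ALTER SEQUENCE s AS integer;     -- widens the permitted range
    CREATE TABLE t (id smallserial); -- backing sequence is smallint-typed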
|
initdb and pg_basebackup now have a --waldir option rather than --xlogdir,
and pg_basebackup now has --wal-method rather than --xlog-method.
|
xlog-switch becomes wal-switch, and xlog-insert becomes wal-insert.
|
This means pg_receivexlog becomes pg_receivewal, pg_resetxlog
becomes pg_resetwal, and pg_xlogdump becomes pg_waldump.
|
The S/390 members of the buildfarm are showing failures indicating
that they're having trouble with the rint() calls I added yesterday.
There's no good reason for that, and I wonder if it is a compiler bug
similar to the one we worked around in d9476b838. Try to fix it using
the same method as before, namely to store the result of rint() back
into a "double" variable rather than immediately converting to int64.
(This isn't entirely waving a dead chicken, since on machines with
wider-than-double float registers, the extra store forces a width
conversion. I don't know if S/390 is like that, but it seems worth
trying.)
In passing, merge duplicate ereport() calls in float8_timestamptz().
Per buildfarm.
|
Commit f82ec32ac30ae7e3ec7c84067192535b2ff8ec0e renamed the pg_xlog
directory to pg_wal. To make things consistent, and because "xlog" is
terrible terminology for either "transaction log" or "write-ahead log",
rename all SQL-callable functions that contain "xlog" in the name to
instead contain "wal". (Note that this may pose an upgrade hazard for
some users.)
Similarly, rename the xlog_position argument of the functions that
create slots to be called wal_position.
Discussion: https://www.postgresql.org/message-id/CA+Tgmob=YmA=H3DbW1YuOXnFVgBheRmyDkWcD9M8f=5bGWYEoQ@mail.gmail.com
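For instance (a sketch; note that some of the new "_location" names
were themselves later renamed to "_lsn"):

    SELECT pg_switch_wal();       -- formerly pg_switch_xlog()
    SELECT pg_walfile_name(pg_current_wal_location());
    -- formerly pg_xlogfile_name(pg_current_xlog_location())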
|
Even if we don't emit definitions for SH_ALLOCATE and SH_FREE, we
still need prototypes. The user can't define them before including
simplehash.h because SH_TYPE isn't available yet.
For the allocator to be able to access private_data, it needs to
become an argument to SH_CREATE. Previously we relied on callers
to set that after returning from SH_CREATE, but SH_CREATE calls
SH_ALLOCATE before returning.
Dilip Kumar, reviewed by me.
|
Thomas Munro
|
It did that to verify that the page was an overflow page rather than
anything else, but that means that checking the status of all the
overflow bits requires reading the entire index. So don't do that.
The new code validates that the page is not a primary bucket page
or bitmap page by looking at the metapage, so that using this on
large numbers of pages can be reasonably efficient.
Ashutosh Sharma, per a complaint from me, and with further
modifications by me.
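Usage is unchanged; only the cost of the check drops. A hedged
example (index name and block number invented):

    -- report the bitmap bit for a single overflow page; this no
    -- longer requires reading every page of the index
    SELECT bitmapblkno, bitmapbit, bitstatus
      FROM hash_bitmap_info('some_hash_index', 5);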
|
It's always been possible for index AMs to cache data across successive
amgettuple calls within a single SQL command: the IndexScanDesc.opaque
field is meant for precisely that. However, no comparable facility
exists for amortizing setup work across successive aminsert calls.
This patch adds such a feature and teaches GIN, GiST, and BRIN to use it
to amortize catalog lookups they'd previously been doing on every call.
(The other standard index AMs keep everything they need in the relcache,
so there's little to improve there.)
For GIN, the overall improvement in a statement that inserts many rows
can be as much as 10%, though it seems a bit less for the other two.
In addition, this makes a really significant difference in runtime
for CLOBBER_CACHE_ALWAYS tests, since in those builds the repeated
catalog lookups are vastly more expensive.
The reason this has been hard up to now is that the aminsert function is
not passed any useful place to cache per-statement data. What I chose to
do is to add suitable fields to struct IndexInfo and pass that to aminsert.
That's not widening the index AM API very much because IndexInfo is already
within the ken of ambuild; in fact, by passing the same info to aminsert
as to ambuild, this is really removing an inconsistency in the AM API.
Discussion: https://postgr.es/m/27568.1486508680@sss.pgh.pa.us
|
A proposed patch, also by Thomas and in the same thread, would change
the output order of these. Independently of whether the follow-up
patches get committed, nailing down the order in these specific tests
seems at worst harmless.
Author: Thomas Munro
Discussion: https://postgr.es/m/CAEepm=1D4-tP7j7UAgT_j4ZX2j4Ehe1qgZQWFKBMb8F76UW5Rg@mail.gmail.com
|
When converting a float value to integer microseconds, we should be careful
to round the value to the nearest integer, typically with rint(); simply
assigning to an int64 variable will truncate, causing apparently off-by-one
values in cases that should work. Most places in the datetime code got
this right, but not these two.
float8_timestamptz() is new as of commit e511d878f (9.6). Previous
versions effectively depended on interval_mul() to do roundoff correctly,
which it does, so this fixes an accuracy regression in 9.6.
The problem in make_interval() dates to its introduction in 9.4. Aside
from being careful to round rather than truncate, let's incorporate the hours and
minutes inputs into the result with exact integer arithmetic, rather than
risk introducing roundoff error where there need not have been any.
float8_timestamptz() problem reported by Erik Nordström, though this is
not his proposed patch. make_interval() problem found by me.
Discussion: https://postgr.es/m/CAHuQZDS76jTYk3LydPbKpNfw9KbACmD=49dC4BrzHcfPv6yA1A@mail.gmail.com
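A quick illustration of the two functions involved (input values
invented):

    SELECT make_interval(hours => 1, mins => 2, secs => 3.4567);
    -- fractional seconds are rounded to the nearest microsecond
    SELECT to_timestamp(1486480176.999999);
    -- the float8-to-timestamptz conversion likewise rounds now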
|
When the new GUC wal_consistency_checking is set to a non-empty value,
it triggers recording of additional full-page images, which are
compared on the standby against the results of applying the WAL record
(without regard to those full-page images). Allowable differences
such as hints are masked out, and the resulting pages are compared;
any difference results in a FATAL error on the standby.
Kuntal Ghosh, based on earlier patches by Michael Paquier and Heikki
Linnakangas. Extensively reviewed and revised by Michael Paquier and
by me, with additional reviews and comments from Amit Kapila, Álvaro
Herrera, Simon Riggs, and Peter Eisentraut.
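For example, to enable it for selected resource managers (superuser
only; 'all' also works, at the cost of much more WAL):

    ALTER SYSTEM SET wal_consistency_checking = 'btree,hash';
    SELECT pg_reload_conf();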
|
This method is more elegant and more efficient.
Per a suggestion from Andres Freund, who also briefly reviewed
the patch.
|
In the large DO block, collect row TIDs into array variables instead of
creating and dropping a pile of temporary tables. In a normal build,
this reduces the brin test script's runtime from about 1.1 sec to 0.4 sec
on my workstation. That's not all that exciting perhaps, but in a
CLOBBER_CACHE_ALWAYS test build, the runtime drops from 20 min to 17 min,
which is a little more useful. In combination with some other changes
I plan to propose, this will help provide a noticeable reduction in
cycle time for CLOBBER_CACHE_ALWAYS buildfarm critters.
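The shape of the change, roughly (a hedged sketch, not the actual
test code):

    DO $$
    DECLARE
      tids tid[];
    BEGIN
      -- collect the row TIDs once into an array variable ...
      SELECT array_agg(ctid) INTO tids FROM brintest;
      -- ... instead of creating and dropping a temp table per step
    END $$;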
|
There's no generic guard against multiple inclusion in this file,
for good reason. But these typedefs need one, as per a report
from Jeff Janes.
|
This is infrastructure for a pending patch to allow parallel bitmap
heap scans.
Dilip Kumar, reviewed (in earlier versions) by Andres Freund and
(more recently) by me. Some further renaming by me, also.
|
Mithun Cy, per a report by Erik Rijkers
|
This avoids a very significant amount of buffer manager traffic and
contention when scanning hash indexes, because it's no longer
necessary to lock and pin the metapage for every scan. We do need
some way of figuring out when the cache is too stale to use any more,
so that when we lock the primary bucket page to which the cached
metapage points us, we can tell whether a split has occurred since we
cached the metapage data. To do that, we use the hash_prevblkno field
in the primary bucket page, which would otherwise always be set to
InvalidBlockNumber.
This patch contains code so that it will continue working (although
less efficiently) with hash indexes built before this change, but
perhaps we should consider bumping the hash version and ripping out
the compatibility code. That decision can be made later, though.
Mithun Cy, reviewed by Jesper Pedersen, Amit Kapila, and by me.
Before committing, I made a number of cosmetic changes to the last
posted version of the patch, adjusted _hash_getcachedmetap to be more
careful about order of operation, and made some necessary updates to
the pageinspect documentation and regression tests.
|
Before, reading pg_sequences.last_value would fail unless the user had
appropriate sequence permissions, which would make the pg_sequences view
cumbersome to use. Instead, return null in place of the real value
when the user lacks the appropriate permissions.
From: Michael Paquier <michael.paquier@gmail.com>
Reported-by: Shinoda, Noriyoshi <noriyoshi.shinoda@hpe.com>
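A sketch of the visible change (role name invented):

    CREATE SEQUENCE priv_seq;
    SET ROLE some_unprivileged_role;
    SELECT sequencename, last_value FROM pg_sequences;
    -- last_value is NULL for sequences the current role cannot
    -- read, instead of the whole query failing with an error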
|
The problem with the original coding here is that we might receive (and
clear) a relcache invalidation signal for the target relation down inside
one of the index_open calls we're doing. Since the target is open, we
would not drop the relcache entry, just reset its rd_indexvalid and
rd_indexlist fields. But RelationGetIndexAttrBitmap() kept going, and
would eventually cache and return potentially-obsolete attribute bitmaps.
The case where this matters is where the inval signal was from a CREATE
INDEX CONCURRENTLY telling us about a new index on a formerly-unindexed
column. (In all other cases, the lock we hold on the target rel should
prevent any concurrent change in index state.) Even just returning the
stale attribute bitmap is not such a problem, because it shouldn't matter
during the transaction in which we receive the signal. What hurts is
caching the stale data, because it can survive into later transactions,
breaking CREATE INDEX CONCURRENTLY's expectation that later transactions
will not create new broken HOT chains. The upshot is that there's a window
for building corrupted indexes during CREATE INDEX CONCURRENTLY.
This patch fixes the problem by rechecking that the set of index OIDs
is still the same at the end of RelationGetIndexAttrBitmap() as it was
at the start. If not, we loop back and try again. That's a little
more than is strictly necessary to fix the bug --- in principle, we
could return the stale data but not cache it --- but it seems like a
bad idea on general principles for relcache to return data it knows
is stale.
There might be more hazards of the same ilk, or there might be a better
way to fix this one, but this patch definitely improves matters and seems
unlikely to make anything worse. So let's push it into today's releases
even as we continue to study the problem.
Pavan Deolasee and myself
Discussion: https://postgr.es/m/CABOikdM2MUq9cyZJi1KyLmmkCereyGp5JQ4fuwKoyKEde_mzkQ@mail.gmail.com
|
Commit 665d1fad9 introduced rd_pkindex, and made RelationGetIndexList
responsible for updating it, but didn't bother to fix
RelationGetIndexList's header comment to say so.
|
Also improve the message style a bit while we're here.
|
Masahiko Sawada
|
Backpatch to all supported versions, where applicable, to make backpatching
of future fixes go more smoothly.
Josh Soref
Discussion: https://www.postgresql.org/message-id/CACZqfqCf+5qRztLPgmmosr-B0Ye4srWzzw_mo4c_8_B_mtjmJQ@mail.gmail.com
|
Rafia Sabih
|
When there's only one two-phase state file, there's also only one
long-running prepared transaction. Adjust the message text
accordingly.
Nikhil Sontakke
Discussion: http://postgr.es/m/CAMGcDxcmR_DWZXXndGoPzVQx=B17A5=RviEA1qNaF=FWLy5Whw@mail.gmail.com
|
Previously, the DROP SUBSCRIPTION command did not release the lock at all.
Original patches by Kyotaro Horiguchi and Michael Paquier,
but I didn't use them.
Discussion: http://postgr.es/m/20170201.173623.66249355.horiguchi.kyotaro@lab.ntt.co.jp
|
Modify FETCH_COUNT to always have a defined value, like other control
variables, mainly so it will always appear in "\set" output.
Add hooks to force HISTSIZE to be defined and require it to have an
integer value. (I don't see any point in allowing it to be set to
non-integral values.)
Add hooks to force IGNOREEOF to be defined and require it to have an
integer value. Unlike the other cases, here we're trying to be
bug-compatible with a rather bogus externally-defined behavior, so I think
we need to continue to allow "\set IGNOREEOF whatever". Fix it so that
the substitution hook silently replaces non-numeric values with "10",
so that the stored value always reflects what we're really doing.
Add a dummy assign hook for HISTFILE, just so it's always in
variables.c's list. We can't require it to be defined always, because
that would break the interaction with the PSQL_HISTORY environment
variable, so there isn't any change in visible behavior here.
Remove tab-complete.c's private list of known variable names, since that's
really a maintenance nuisance. Given the preceding changes, there are no
control variables it won't show anyway. This does mean that if for some
reason you've unset one of the status variables (DBNAME, HOST, etc), that
variable would not appear in tab completion for \set. But I think that's
fine, for at least two reasons: we shouldn't be encouraging people to use
those variables as regular variables, and if someone does do so anyway,
why shouldn't it act just like a regular variable?
Remove ugly and no-longer-used-anywhere GetVariableNum(). In general,
future additions of integer-valued control variables should follow the
paradigm of adding an assign hook using ParseVariableNum(), so there's
no reason to expect we'd need this again later.
Discussion: https://postgr.es/m/17516.1485973973@sss.pgh.pa.us
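Approximately, the visible behavior is now (psql session sketch):

    \echo :FETCH_COUNT     -- always defined now; 0 by default
    \set HISTSIZE banana   -- rejected: an integer value is required
    \set IGNOREEOF banana
    \echo :IGNOREEOF       -- prints 10: non-numeric input normalized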
|
Coverity complained that we might pass a null pointer to strcmp()
if PQresultErrorField were to return NULL. That shouldn't be possible,
since the server is supposed to always provide some SQLSTATE or other
in an error message. But we usually defend against such hazards, and
it only takes a little more code to do so here.
There's no good reason to think this is a live bug, so no back-patch.
|
If we forcibly place a Material node atop a finished subplan, we need
to move any initPlans attached to the subplan up to the Material node,
in order to keep SS_finalize_plan() happy. I'd figured this out in
commit 7b67a0a49 for the case of materializing a cursor plan, but out of
an abundance of caution, I put the initPlan movement hack at the call
site for that case, rather than inside materialize_finished_plan().
That was the wrong thing, because it turns out to also be necessary for
the only other caller of materialize_finished_plan(), ie subselect.c.
We lacked any test cases that exposed the mistake, but bug#14524 from
Wei Congrui shows that it's possible to get an initPlan reference into
the top tlist in that case too, and then SS_finalize_plan() complains.
Hence, move the hack into materialize_finished_plan().
In HEAD, also relocate some recently-added tests in subselect.sql, which
I'd unthinkingly dropped into the middle of a sequence of related tests.
Report: https://postgr.es/m/20170202060020.1400.89021@wrigleys.postgresql.org
|
Given a targetlist like "srf(x), f(srf(x))", split_pathtarget_at_srfs()
decided that it needed two levels of ProjectSet nodes, failing to notice
that the two SRF calls are textually equal(). Because of that, setrefs.c
would convert the upper ProjectSet's tlist to "Var1, f(Var1)" (where Var1
represents a reference to the srf(x) output of the lower ProjectSet).
This triggered an assertion in nodeProjectSet.c complaining that it found
no SRFs to evaluate, as reported by Erik Rijkers.
What we want in such a case is to evaluate srf(x) only once and use a plain
Result node to compute "Var1, f(Var1)"; that gives results similar to what
previous versions produced, whereas allowing srf(x) to be evaluated again
in an upper ProjectSet would square the number of rows emitted.
Furthermore, even if the SRF calls aren't textually identical, we want them
to be evaluated in lockstep, because that's what happened in the old
implementation. But split_pathtarget_at_srfs() got this completely wrong,
using two levels of ProjectSet for a case like "srf(x), f(srf(y))".
Hence, rewrite split_pathtarget_at_srfs() from the ground up so that it
groups SRFs according to the depth of nesting of SRFs in their arguments.
This is pretty much how we envisioned that working originally, but I blew
it when it came to implementation.
In passing, optimize the case of target == input_target, which I noticed
is not only possible but quite common.
Discussion: https://postgr.es/m/dcbd2853c05d22088766553d60dc78c6@xs4all.nl
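For example (behavior as described above):

    -- the two textually-equal SRF calls are evaluated once, in
    -- lockstep, yielding three rows (1,2), (2,3), (3,4), not nine:
    SELECT generate_series(1,3), generate_series(1,3) + 1;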
|
There is no particularly good reason to limit this value to 1000,
so increase the limit to INT_MAX / 2, the same limit we use for
shared_buffers. It's not clear how much practical effect larger
settings will have, but there seems no harm in letting people try it.
Jim Nasby, less a comment change I stripped out.
Discussion: http://postgr.es/m/f6e58a22-030b-eb8a-5457-f62fb08d701c@BlueTreble.com
|
Patch by Jesper Pedersen and Ashutosh Sharma, with some error handling
improvements by me. Tests from Peter Eisentraut. Reviewed by Álvaro
Herrera, Michael Paquier, Jesper Pedersen, Jeff Janes, Peter
Eisentraut, Amit Kapila, Mithun Cy, and me.
Discussion: http://postgr.es/m/e2ac6c58-b93f-9dd9-f4e6-d6d30add7fdf@redhat.com
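A short example of the new functions (names as in the pageinspect
documentation; output details elided):

    CREATE EXTENSION pageinspect;
    CREATE TABLE t (v int);
    CREATE INDEX t_hash_idx ON t USING hash (v);
    SELECT hash_page_type(get_raw_page('t_hash_idx', 0));  -- metapage
    SELECT * FROM hash_metapage_info(get_raw_page('t_hash_idx', 0));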
|
These were left out by mistake back when support for KOI8-U encoding was
added.
Extracted from Kyotaro Horiguchi's larger patch.
|
Not all compilers understand that the elog(ERROR) never returns.
David Rowley
|
Doing so doesn't seem to be within the purpose of the per-user
connection limits, and has particularly unfortunate effects in
conjunction with parallel queries.
Backpatch to 9.6 where parallel queries were introduced.
David Rowley, reviewed by Robert Haas and Albe Laurenz.