| Commit message | Author | Age |
| |
If SELECT FOR UPDATE NOWAIT tries to lock a tuple that is concurrently
being updated, it might fail to honor its NOWAIT specification and block
instead of raising an error.
Fix by adding a no-wait flag to EvalPlanQualFetch which it can pass down
to heap_lock_tuple; also use it in EvalPlanQualFetch itself to avoid
blocking while waiting for a concurrent transaction.
Authors: Craig Ringer and Thomas Munro, tweaked by Álvaro
http://www.postgresql.org/message-id/51FB6703.9090801@2ndquadrant.com
Per Thomas Munro in the course of his SKIP LOCKED feature submission,
who also provided one of the isolation test specs.
Backpatch to 9.4, because that's as far back as it applies without
conflicts (although the bug goes all the way back). To that branch also
backpatch Thomas Munro's new NOWAIT test cases, committed in master by
Heikki as commit 9ee16b49f0aac819bd4823d9b94485ef608b34e8 .
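For illustration, a minimal sketch of the affected scenario (table and
column names are hypothetical):

    -- session 1
    BEGIN;
    UPDATE accounts SET balance = balance - 10 WHERE id = 1;

    -- session 2: with the fix, this raises an error immediately
    -- instead of blocking behind session 1's uncommitted update
    BEGIN;
    SELECT * FROM accounts WHERE id = 1 FOR UPDATE NOWAIT;
    -- ERROR:  could not obtain lock on row in relation "accounts"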
| |
In some cases, not all Vars were being correctly marked as having been
modified for updatable security barrier views, which resulted in invalid
plans (e.g., when security barrier views were created on top of
inheritance structures).
In passing, be sure to update both varattno and varoattno, as _equalVar
won't consider the Vars identical otherwise. This isn't known to cause
any issues with updatable security barrier views, but was noticed as
missing while working on RLS and makes sense to get fixed.
Back-patch to 9.4 where updatable security barrier views were
introduced.
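A sketch of the affected shape (names are hypothetical): an updatable
security barrier view over an inheritance parent.

    CREATE TABLE parent (id int, val text);
    CREATE TABLE child () INHERITS (parent);
    CREATE VIEW v WITH (security_barrier = true) AS
        SELECT id, val FROM parent WHERE id > 0;
    -- previously this could produce an invalid plan
    UPDATE v SET val = 'updated' WHERE id = 1;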
| |
Spotted by Peter Geoghegan.
| |
Use SECURITY_LOCAL_USERID_CHANGE while building temporary tables;
only escalate to SECURITY_RESTRICTED_OPERATION while potentially
running user-supplied code. The more secure mode was preventing
temp table creation. Add regression tests to cover this problem.
This fixes Bug #11208 reported by Bruno Emanuel de Andrade Silva.
Backpatch to 9.4, where the bug was introduced.
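A plausible reproduction, assuming the failing path is REFRESH
MATERIALIZED VIEW CONCURRENTLY, which builds temp tables while in the
restricted security context (names are hypothetical):

    CREATE MATERIALIZED VIEW mv AS SELECT 1 AS id;
    CREATE UNIQUE INDEX ON mv (id);
    -- previously failed: cannot create temporary table within
    -- security-restricted operation
    REFRESH MATERIALIZED VIEW CONCURRENTLY mv;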
| |
Fabrízio de Royes Mello
| |
Prevent automatic oid assignment when in binary upgrade mode. Also
throw an error when contrib/pg_upgrade_support functions are called when
not in binary upgrade mode.
This prevents automatically-assigned oids from conflicting with later
pre-assigned oids coming from the old cluster. It also makes sure oids
are preserved in all important cases.
| |
Done for clarity
| |
Reverts commits 73d78e11a0f7183c80b93eefbbb6026fe9664015 and
b0488e5c4fbfdce8acc989bdc17d9f0ec09ac281. Also reverts pg_upgrade
changes.
| |
Also adjust pg_upgrade to not use this method for optional TOAST table
creation.
Patch by Fabrízio de Royes Mello
| |
There's no point in setting up a context error callback when doing
conditional lock acquisition, because we never actually wait and so the
user wouldn't be able to see the context message anywhere. In fact,
this is more in line with what ConditionalXactLockTableWait is doing.
Backpatch to 9.4, where this was added.
| |
The symbol was added by 71901ab6d; the original code was introduced by
6868ed749. Development of both overlapped, which is why we apparently
failed to notice.
This is a (very slight) behavior change, so I'm not backpatching this to
9.4 for now, even though the symbol does exist there.
| |
Event triggers want to know the OID of the interesting object created,
which is the main type. The array created as part of the operation is
just a subsidiary object which is not of much interest.
| |
Other DDL commands are already returning the OID, which is required for
future additional event trigger work. This merely brings these
commands in line with the rest of utility command support.
| |
Add a succinct comment explaining why it's correct to change the
persistence in this way. Also s/loggedness/persistence/ because native
speakers didn't like the former term.
Fabrízio and Álvaro
| |
CheckConstraintFetch() leaked a cstring in the caller's context for each
CHECK constraint expression it copied into the relcache. Ordinarily that
isn't problematic, but it can be during CLOBBER_CACHE testing because so
many reloads can happen during a single query; so complicate the code
slightly to allow freeing the cstring after use. Per testing on buildfarm
member barnacle.
This is exactly like the leak fixed in AttrDefaultFetch() by commit
078b2ed291c758e7125d72c3a235f128d40a232b. (Yes, this time I did look for
other instances of the same coding pattern :-(.) Like that patch, no
back-patch, since it seems unlikely that there's any problem except under
very artificial test conditions.
BTW, it strikes me that both of these places would require further work
comparable to commit ab8c84db2f7af008151b848cf1d6a4672a39eecd, if we ever
supported defaults or check constraints on system catalogs: they both
assume they are copying into an empty relcache data structure, and that
conceivably wouldn't be the case during recursive reloading of a system
catalog. This does not seem worth worrying about for the moment, since
there is no near-term prospect of supporting any such thing. So I'll
just note the possibility for the archives' sake.
| |
This enables changing permanent (logged) tables to unlogged and
vice-versa.
(Docs for ALTER TABLE / SET TABLESPACE got shuffled in an order that
hopefully makes more sense than the original.)
Author: Fabrízio de Royes Mello
Reviewed by: Christoph Berg, Andres Freund, Thom Brown
Some tweaking by Álvaro Herrera
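For example (table name is hypothetical):

    CREATE UNLOGGED TABLE events (id int, payload text);
    ALTER TABLE events SET LOGGED;    -- now WAL-logged and crash-safe
    ALTER TABLE events SET UNLOGGED;  -- back to unlogged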
| |
Cause the path extraction operators to return their lefthand input,
not NULL, if the path array has no elements. This seems more consistent
since the case ought to correspond to applying the simple extraction
operator (->) zero times.
Cause other corner cases in field/element/path extraction to return NULL
rather than failing. This behavior is arguably more useful than throwing
an error, since it allows an expression index using these operators to be
built even when not all values in the column are suitable for the
extraction being indexed. Moreover, we already had multiple
inconsistencies between the path extraction operators and the simple
extraction operators, as well as inconsistencies between the JSON and
JSONB code paths. Adopt a uniform rule of returning NULL rather than
throwing an error when the JSON input does not have a structure that
permits the request to be satisfied.
Back-patch to 9.4. Update the release notes to list this as a behavior
change since 9.3.
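A sketch of the resulting behavior (the older jsonb #> fix further down
in this log initially made the empty-path case return NULL; this commit
changes it to return the input):

    SELECT '{"a": {"b": 1}}'::jsonb #> '{}';      -- the input itself
    SELECT '{"a": {"b": 1}}'::jsonb #> '{a,x}';   -- NULL, not an error
    SELECT '[1,2,3]'::jsonb -> 99;                -- NULL, not an error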
| |
As 'ALTER TABLESPACE .. MOVE ALL' really didn't change the tablespace
but instead changed objects inside tablespaces, it made sense to
rework the syntax and supporting functions to operate under the
'ALTER (TABLE|INDEX|MATERIALIZED VIEW)' syntax and to be in
tablecmds.c.
Pointed out by Alvaro, who also suggested the new syntax.
Back-patch to 9.4.
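The reworked syntax looks like this (tablespace names are illustrative):

    -- formerly: ALTER TABLESPACE ts_old MOVE ALL TO ts_new;
    ALTER TABLE ALL IN TABLESPACE ts_old SET TABLESPACE ts_new;
    ALTER INDEX ALL IN TABLESPACE ts_old SET TABLESPACE ts_new;
    ALTER MATERIALIZED VIEW ALL IN TABLESPACE ts_old SET TABLESPACE ts_new;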
| |
jsonb's #> operator segfaulted (dereferencing a null pointer) if the RHS
was a zero-length array, as reported in bug #11207 from Justin Van Winkle.
json's #> operator returns NULL in such cases, so for the moment let's
make jsonb act likewise.
Also add a bunch of regression test queries memorializing the -> and #>
operators' behavior for this and other corner cases.
There is a good argument for changing some of these behaviors, as they
are not very consistent with each other, and throwing an error isn't
necessarily a desirable behavior for operators that are likely to be
used in indexes. However, everybody can agree that a core dump is the
Wrong Thing, and we need test cases even if we decide to change their
expected output later.
| |
While the space is optional, it seems nicer to be consistent with what
you get if you do "SET search_path=...". SET always normalizes the
separator to be comma+space.
Christoph Martin
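The normalization this matches, for example:

    SET search_path = myschema,public;
    SHOW search_path;   -- myschema, public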
| |
This reverts commit 083d29c65b7897f90c70e6dc0a4240a5fa75c8f2.
The reverted commit changed the code so that it raises an error when
IDENTIFY_SYSTEM returns three columns. But that prevents us
from using the replication-related utilities against servers of
older versions, which is not what we want. For that
compatibility, we allow the utilities to accept three columns
as the result of IDENTIFY_SYSTEM, even though it actually returns
four columns in 9.4 or later.
Pointed out by Andres Freund.
| |
Commit 5a991ef8692ed0d170b44958a81a6bd70e90585 added a new column to
the result of the IDENTIFY_SYSTEM command, but that change was not
reflected in several places that check the result. Specifically, though
the number of columns in the result was increased to 4, some
replication code still compared it with 3.
Back-patch to 9.4, where the number of columns in the IDENTIFY_SYSTEM
result was increased.
Report from Michael Paquier
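For reference, over a replication connection (e.g. psql "replication=1"):

    IDENTIFY_SYSTEM;
    -- 9.4 and later return four columns:
    --   systemid | timeline | xlogpos | dbname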
| |
In support of this, have the MSVC build follow GNU make in preferring
GNUmakefile over Makefile when a directory contains both.
Michael Paquier, reviewed by MauMau.
| |
strncpy() is a specialized API unsuited for routine copying into
fixed-size buffers. On a system where the length of a single filename
can exceed MAXPGPATH, the pg_archivecleanup change prevents a simple
crash in the subsequent strlen(). Few filesystems support names that
long, and calling pg_archivecleanup with untrusted input is still not a
credible use case. Therefore, no back-patch.
David Rowley
| |
Move the functions within the file so that public interface functions come
first, followed by internal functions. Previously, be_tls_write was first,
then internal stuff, and finally the rest of the public interface, which
clearly didn't make much sense.
Per Andres Freund's complaint.
| |
Commit f30015b6d794c15d52abbb3df3a65081fbefb1ed made this happen for
timestamp and timestamptz, but it seems pretty inconsistent to not
do it for simple dates as well.
(In passing, I re-pgindent'd json.c.)
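For example (output sketched under the assumption that the session's
DateStyle is non-ISO):

    SET datestyle = 'German, DMY';
    SELECT to_json('2014-08-17'::date);
    -- "2014-08-17"   (ISO 8601, regardless of DateStyle)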
| |
PG_RETURN_BOOL() should only be used in functions following the V1 SQL
function API. This coding accidentally fails to fail since letting the
compiler coerce the Datum representation of bool back to plain bool
does give the right answer; but that doesn't make it a good idea.
Back-patch to older branches just to avoid unnecessary code divergence.
| |
This provides a small but worthwhile speedup when sorting text, at least
in cases to which the sortsupport machinery applies.
Robert Haas and Peter Geoghegan
| |
parseRelOptions() tended to leak memory in the caller's context. Most
of the time this doesn't really matter since the caller's context is
at most query-lifespan, and the function won't be invoked very many times.
However, when testing with CLOBBER_CACHE_RECURSIVELY, the same relcache
entry can get rebuilt a *lot* of times in one query, leading to significant
intraquery memory bloat if it has any reloptions. Noted while
investigating a related report from Tomas Vondra.
In passing, get rid of some Asserts that are redundant with the one
done by deconstruct_array().
As with other patches to avoid leaks in CLOBBER_CACHE testing, it doesn't
really seem worth back-patching this.
| |
When replacing rd_indexlist, rd_indexattr, etc, we neglected to pfree any
old value of these fields. Under ordinary circumstances, the old value
would always be NULL, so this seemed reasonable enough. However, in cases
where we're rebuilding a system catalog's relcache entry and another cache
flush occurs on that same catalog meanwhile, it's possible for the field to
not be NULL when we return to the outer level, because we already refilled
it while recovering from the inner flush. This leads to a fairly small
session-lifespan leak in CacheMemoryContext. In real-world usage the leak
would be too small to notice; but in testing with CLOBBER_CACHE_RECURSIVELY
the leakage can add up to the point of causing OOM failures, as reported by
Tomas Vondra.
The issue has been there a long time, but it only seems worth fixing in
HEAD, like the previous fix in this area (commit 078b2ed291c758e7).
| |
When doing logical decoding using START_LOGICAL_REPLICATION in a
walsender process, the walsender sometimes sent out keepalive
messages too frequently, asking for feedback every time.
WalSndWaitForWal() sends out keepalive messages when it's waiting for
new WAL to be generated locally and it sees that the remote side
hasn't yet flushed WAL up to the local position. That generally is
good, but it causes problems if the remote side only writes but doesn't
yet flush changes. So check for both the remote write and flush positions.
Additionally, we had been asking for feedback with every keepalive
message, which isn't warranted when waiting for WAL, in contrast to
preventing timeouts because of wal_sender_timeout.
Complaint and patch by Steve Singer.
| |
When both postgresql.conf and postgresql.auto.conf have their own entry of
the same parameter, PostgreSQL uses the entry in postgresql.auto.conf because
it appears last in the configuration scan. IOW, the other entries which appear
earlier are ignored. But previously, ProcessConfigFile() detected invalid
settings even in those unused entries and emitted error messages
complaining about them at postmaster startup. Complaining about entries
that are ignored is basically useless.
This problem happened because ProcessConfigFile() was called twice at
postmaster startup and the first call read only postgresql.conf. That is, the
first call could check entries that would eventually be ignored by
the second call, which read both postgresql.conf and postgresql.auto.conf.
To work around the problem, this commit changes ProcessConfigFile so that
its first call processes only data_directory and the second one does all the
entries. It's OK to process data_directory in the first call because it's
ensured that data_directory doesn't exist in postgresql.auto.conf.
Back-patch to 9.4 where postgresql.auto.conf was added.
Patch by me. Review by Amit Kapila
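For context, postgresql.auto.conf is written by ALTER SYSTEM and is read
last, so its entries override postgresql.conf; a minimal sketch:

    ALTER SYSTEM SET work_mem = '64MB';  -- written to postgresql.auto.conf
    SELECT pg_reload_conf();  -- this setting now wins over any
                              -- work_mem line in postgresql.conf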
| |
This refactoring is in preparation for adding support for other SSL
implementations, with no user-visible effects. There are now two #defines,
USE_OPENSSL which is defined when building with OpenSSL, and USE_SSL which
is defined when building with any SSL implementation. Currently, OpenSSL is
the only implementation so the two #defines go together, but USE_SSL is
supposed to be used for implementation-independent code.
The libpq SSL code is changed to use a custom BIO, which does all the raw
I/O, like we've been doing in the backend for a long time. That makes it
possible to use MSG_NOSIGNAL to block SIGPIPE when using SSL, which avoids
a couple of syscalls for each send(). Probably doesn't make much performance
difference in practice - the SSL encryption is expensive enough to mask the
effect - but it was a natural result of this refactoring.
Based on a patch by Martijn van Oosterhout from 2006. Briefly reviewed by
Alvaro Herrera, Andreas Karlsson, Jeff Janes.
| |
There's actually no need for any special case for unknown-type literals,
since we only need to push the value through its output function and
unknownout() works fine. The code that was here was completely bizarre
anyway, and would fail outright in cases that should work, not to mention
suffering from some copy-and-paste bugs.
| |
Fix an obvious typo in json_build_object()'s complaint about invalid
number of arguments, and make the errhint a bit more sensible too.
Per discussion about how to word the improved hint, change the few places
in the documentation that refer to JSON object field names as "names" to
say "keys" instead, since that's what we've said in the vast majority of
places in the docs. Arguably "name" is more correct, since that's the
terminology used in RFC 7159; but we're stuck with "key" in view of the
naming of json_object_keys() so let's at least be self-consistent.
I adjusted a few code comments to match this as well, and failed to
resist the temptation to clean up some odd whitespace choices in the
same area, as well as a useless duplicate PG_ARGISNULL() check. There's
still quite a bit of code that uses the phrase "field name" in non-user-
visible ways, so I left those usages alone.
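For example (error and hint wording approximate):

    SELECT json_build_object('a', 1, 'b', 2);  -- {"a" : 1, "b" : 2}
    SELECT json_build_object('a');
    -- ERROR:  argument list must have even number of elements
    -- HINT:  The arguments of json_build_object() must consist of
    --        alternating keys and values.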
| |
Such cases are disallowed by the SQL spec, and even if we wanted to allow
them, the semantics seem ambiguous: how should the FK columns be matched up
with the columns of a unique index? (The matching could be significant in
the presence of opclasses with different notions of equality, so this issue
isn't just academic.) However, our code did not previously reject such
cases, but instead would either fail to match to any unique index, or
generate a bizarre opclass-lookup error because of sloppy thinking in the
index-matching code.
David Rowley
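A sketch of the now-rejected shape (names are hypothetical):

    CREATE TABLE pk (a int UNIQUE);
    CREATE TABLE fk (x int, y int,
        FOREIGN KEY (x, y) REFERENCES pk (a, a));  -- now raises an error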
| |
Previously, TOAST tables that were required only in the new cluster could
cause oid conflicts if they were auto-numbered and a later conflicting
oid had to be assigned.
Backpatch through 9.3.
| |
This could be useful for datatypes like text, where we might want
to optimize for some collations but not others. However, this patch
doesn't introduce any new sortsupport functions that work this way;
it merely revises the code so that future patches may do so.
Patch by me. Review by Peter Geoghegan.
| |
When more than one setting entry for the same parameter exists in the
configuration file, PostgreSQL uses only the entry appearing last in
the configuration file scan. Since the other entries are not used,
ParseConfigFp() doesn't need to process them, but previously it did.
This problematic behavior caused the configuration file scan
to detect invalid settings in unused entries (e.g., the existence of
multiple entries for a PGC_POSTMASTER parameter) and log messages
complaining about them.
This commit changes the configuration file scan so that it processes
only the last entry of each parameter.
Note that when multiple entries for the same parameter exist both in
postgresql.conf and postgresql.auto.conf, unused entries in
postgresql.conf are still processed only at postmaster startup.
The problem has existed for a long time, but a user is more likely
to encounter it since 9.4, where the ALTER SYSTEM command was introduced.
So back-patch to 9.4.
Amit Kapila, slightly modified by me. Per report from Christoph Berg.
| |
These messages are new in 9.4, which hasn't been released yet, so
back-patch to REL9_4_STABLE.
Daniele Varrazzo
| |
log_newpage is used by many indexams, in addition to heap, but for
historical reasons it's always been part of the heapam rmgr. Starting with
9.3, we have another WAL record type for logging an image of a page,
XLOG_FPI. Simplify things by moving log_newpage and log_newpage_buffer to
xlog.c, and switch to using the XLOG_FPI record type.
Bump the WAL version number because the code to replay the old HEAP_NEWPAGE
records is removed.
| |
When autovacuum is nominally off, we will still launch autovac workers
to vacuum tables that are at risk of XID wraparound. But after we'd done
that, an autovac worker would proceed to autovacuum every table in the
targeted database that met the usual thresholds for autovacuuming.
This is at best pretty unexpected; at worst it delays response to the
wraparound threat. Fix it so that if autovacuum is nominally off, we
*only* do forced vacuums and not any other work.
Per gripe from Andrey Zhidenkov. This has been like this all along,
so back-patch to all supported branches.
| |
InitProcess() relies on IsBackgroundWorker to decide whether the PGPROC
for a new backend should be taken from ProcGlobal's freeProcs or from
bgworkerFreeProcs. In EXEC_BACKEND builds, InitProcess() is called
sooner than in non-EXEC_BACKEND builds, and IsBackgroundWorker wasn't
getting initialized soon enough.
Report by Noah Misch. Diagnosis and fix by me.
| |
Commit 0ac5ad5134f2 removed an optimization in multixact.c that skipped
fetching members of MultiXactId that were older than our
OldestVisibleMXactId value. The reason this was removed is that it is
possible for multixacts that contain updates to be older than that
value. However, if the caller is certain that the multi does not
contain an update (because the infomask bits say so), it can pass this
info down to GetMultiXactIdMembers, enabling it to use the old
optimization.
Pointed out by Andres Freund in 20131121200517.GM7240@alap2.anarazel.de
| |
Testing for abortedness of a multixact member that's being frozen is
unnecessary: we only need to know whether the transaction is still in
progress or committed to determine whether it must be kept or not. This
let us simplify the code a bit and avoid a useless TransactionIdDidAbort
test.
Suggested by Andres Freund a while back.