Commit messages

---

We may process relcache flush requests during transaction startup or
shutdown. In general it's not terribly safe to do catalog access at those
times, so the code's habit of trying to immediately revalidate unflushable
relcache entries is risky. Although there are no field trouble reports
that are positively traceable to this, we have been able to demonstrate
failure of the assertions recently added in RelationIdGetRelation() and
SearchCatCache(). On the other hand, it seems safe to just postpone
revalidation of the cache entry until we're inside a valid transaction.
The one case where this is questionable is where we're exiting a
subtransaction and the outer transaction is holding the relcache entry open
--- but if we made any significant changes to the rel inside such a
subtransaction, we've got problems anyway. There are mechanisms in place
to prevent that (to wit, locks for cross-session cases and
CheckTableNotInUse() for intra-session cases), so let's trust to those
mechanisms to keep us out of trouble.

---

Commit 42c80c696e9c8323841180029cc62741c21bd356 added an
Assert(IsTransactionState()) in SearchCatCache(), to catch
any code that thought it could do a catcache lookup outside
transactions. Extend the same idea to relcache lookups.

---

Report from Andres Freund.

---

Craig Ringer.

---

The temporary statistics files don't need to be included in the backup
because they are always reset at the beginning of archive recovery.
This patch changes pg_basebackup so that it skips all files located in
$PGDATA/pg_stat_tmp or in the directory specified by the
stats_temp_directory parameter.
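As a quick check of which directory this exclusion applies to, the
setting can be inspected from SQL (a minimal illustration; the default
is 'pg_stat_tmp' under the data directory):

    -- pg_basebackup now skips whatever directory this points to.
    SHOW stats_temp_directory;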

---

Previously, an input array string that started with a single-element
array dimension would later accept a multi-dimensional segment.
BACKWARD INCOMPATIBILITY
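A hypothetical illustration of the shape that is now rejected (the
exact error text may vary):

    -- The first segment {1} is one-dimensional with a single element,
    -- so a later two-dimensional segment is no longer accepted.
    SELECT '{{1},{{2},{3}}}'::int[];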

---

Replication slots are a crash-safe data structure which can be created
on either a master or a standby to prevent premature removal of
write-ahead log segments needed by a standby, as well as (with
hot_standby_feedback=on) pruning of tuples whose removal would cause
replication conflicts. Slots have some advantages over existing
techniques, as explained in the documentation.
In a few places, we refer to the type of replication slots introduced
by this patch as "physical" slots, because forthcoming patches for
logical decoding will also have slots, but with somewhat different
properties.
Andres Freund and Robert Haas
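A minimal usage sketch of the SQL interface introduced here (the slot
name is a placeholder):

    -- Create a physical slot, inspect it, and drop it again.
    SELECT * FROM pg_create_physical_replication_slot('my_slot');
    SELECT slot_name, slot_type, active FROM pg_replication_slots;
    SELECT pg_drop_replication_slot('my_slot');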

---

New checks include input, month/day/time internal adjustments, addition,
subtraction, multiplication, and negation. Also adjust docs to
correctly specify interval size in bytes.
Report from Rok Kralj
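An illustrative case (previously such arithmetic could wrap around
silently; with the new checks it should raise an "interval out of
range" error):

    -- The months field is a 32-bit integer, so this addition overflows.
    SELECT '2147483647 months'::interval + '1 month'::interval;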

---

They are in fact set in every case where they are needed, but the
compiler doesn't know that.
Per gripe from Tom Lane.

---

This was a notable omission from the json functions added in 9.3 and
there have been numerous complaints about its absence.
Laurence Rowe.

---

This patch adds an option, huge_tlb_pages, which allows requesting the
shared memory segment to be allocated using huge pages, by using the
MAP_HUGETLB flag in mmap(). This can improve performance.
The default is 'try', which means that we will attempt to use huge
pages and fall back to non-huge pages if that doesn't work. Currently,
only Linux has MAP_HUGETLB. On other platforms, the default 'try'
behaves the same as 'off'.
In passing, don't try to round the mmap() size to a multiple of the
page size. mmap() doesn't require that, and there's no particular
reason for PostgreSQL to do it either. When using MAP_HUGETLB, however,
round the request size up to the nearest 2MB boundary. This works
around a bug in some Linux kernel versions, and also avoids wasting
memory, because the kernel will round the size up anyway.
Many people were involved in writing this patch, including Christian Kruse,
Richard Poole, Abhijit Menon-Sen, reviewed by Peter Geoghegan, Andres Freund
and me.
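The effective setting can be checked from SQL, using the option name as
it stands in this patch:

    SHOW huge_tlb_pages;  -- 'try' by default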

---

json_build_array() and json_build_object() allow for the construction
of arbitrarily complex json trees. json_object() turns a one- or
two-dimensional array, or two separate arrays, into a json object of
name/value pairs, similarly to the hstore() function.
json_object_agg() aggregates its two arguments into a single json
object of name/value pairs.
Catalog version bumped.
Andrew Dunstan, reviewed by Marko Tiikkaja.
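A few illustrative calls:

    SELECT json_build_object('name', 'Alice',
                             'scores', json_build_array(1, 2, 3));
    SELECT json_object('{a,b}', '{1,2}');
    -- Aggregate key/value pairs from rows into one object.
    SELECT json_object_agg(k, v)
    FROM (VALUES ('x', 1), ('y', 2)) AS t(k, v);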

---

This view shows statistics about the WAL archiver process's activity.
Gabriele Bartolini, reviewed by Michael Paquier, refactored a bit by me.
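For example:

    -- One row summarizing archiver activity since the last stats reset.
    SELECT archived_count, last_archived_wal, failed_count
    FROM pg_stat_archiver;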

---

This makes it possible to store lwlocks as part of some other data
structure in the main shared memory segment, or in a dynamic shared
memory segment. There is still a main LWLock array and this patch does
not move anything out of it, but it provides necessary infrastructure
for doing that in the future.
This change is likely to increase the size of LWLockPadded on some
platforms, especially 32-bit platforms where it was previously only
16 bytes.
Patch by me. Review by Andres Freund and KaiGai Kohei.

---

Fix integer overflow issue noted by Magnus Hagander, as well as a bunch
of other infelicities in commit ee1e5662d8d8330726eaef7d3110cb7add24d058
and its unreasonably large number of followups.

---

Michael Paquier

---

Michael Paquier

---

Backpatch to 9.3.
Brar Piening.

---

Since C99, it's been standard for printf and friends to accept a "z" size
modifier, meaning "whatever size size_t has". Up to now we've generally
dealt with printing size_t values by explicitly casting them to unsigned
long and using the "l" modifier; but this is really the wrong thing on
platforms where pointers are wider than longs (such as Win64). So let's
start using "z" instead. To ensure we can do that on all platforms, teach
src/port/snprintf.c to understand "z", and add a configure test to force
use of that implementation when the platform's version doesn't handle "z".
Having done that, modify a bunch of places that were using the
unsigned-long hack to use "z" instead. This patch doesn't pretend to have
gotten everyplace that could benefit, but it catches many of them. I made
an effort in particular to ensure that all uses of the same error message
text were updated together, so as not to increase the number of
translatable strings.
It's possible that this change will result in format-string warnings from
pre-C99 compilers. We might have to reconsider if there are any popular
compilers that will warn about this; but let's start by seeing what the
buildfarm thinks.
Andres Freund, with a little additional work by me

---

Some cases still reported errors and aborted instead of issuing a
NOTICE that the object was being skipped. This made it difficult to
handle pg_dump --clean cleanly, so change those cases to skip missing
objects properly.
Per bug #7873 reported by Dave Rolsky; apparently this affects a large
number of users.
Authors: Pavel Stehule and Dean Rasheed. Some tweaks by Álvaro Herrera
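A hypothetical illustration of the change (the object names below are
placeholders):

    -- Now emits a NOTICE and succeeds even though the trigger's
    -- containing table does not exist.
    DROP TRIGGER IF EXISTS no_such_trigger ON no_such_table;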

---

This will help in preparing clean patches for the upcoming
json work.

---

Unlike our other array functions, this considers the total number of
elements across all dimensions, and returns 0 rather than NULL when the
array has no elements. But it seems that both of those behaviors are
almost universally desired, so hopefully that's OK.
Marko Tiikkaja, reviewed by Dean Rasheed and Pavel Stehule
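For example:

    SELECT cardinality(ARRAY[[1,2],[3,4]]);  -- 4, across all dimensions
    SELECT cardinality('{}'::int[]);         -- 0, not NULL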

---

Commit a5bca4ef034f71175d46462963af2329d22068c2 accidentally changed
when the "skipping missing configuration file" message is emitted,
because it forced OK to true instead of leaving the value untouched.
Spotted by Tom Lane.

---

Commit 138184adc5f7c60c184972e4d23f8cdb32aed77d plugged some but not
all of the leaks from commit 2a0c81a12c7e6c5ac1557b0f1f4a581f23fd4ca7.
This tightens things up some more.
Amit Kapila, per an observation by Tom Lane

---

When there are consecutive spaces (or other non-format-code characters) in
the format, we should advance over exactly that many characters of input.
The previous coding mistakenly did a "skip whitespace" action between such
characters, possibly allowing more input to be skipped than the user
intended. We only need to skip whitespace just before an actual field.
This is really a bug fix, but given the minimal number of field complaints
and the risk of breaking applications coded to expect the old behavior,
let's not back-patch it.
Jeevan Chalke

---

Suggested by Tom

---

These changes should generally improve correctness/maintainability.
A nice side benefit is that several kilobytes move from initialized
data to text segment, allowing them to be shared across processes and
probably reducing copy-on-write overhead while forking a new backend.
Unfortunately this doesn't seem to help libpq in the same way (at least
not when it's compiled with -fpic on x86_64), but we can hope the linker
at least collects all nominally-const data together even if it's not
actually part of the text segment.
Also, make pg_encname_tbl[] static in encnames.c, since there seems
no very good reason for any other code to use it; per a suggestion
from Wim Lewis, who independently submitted a patch that was mostly
a subset of this one.
Oskari Saarenmaa, with some editorialization by me

---

Expand the messages when log_connections is enabled to include the
fact that SSL is used and the SSL cipher information.
Dr. Andreas Kunert, review by Marko Kreen

---

Move FreeConfigVariables() later to make sure ErrorConfFile is valid
when we use it, and get rid of an unnecessary string copy operation.
Amit Kapila, kibitzed by me.

---

Per reports from Andres Freund and Luke Campbell, a server failure during
set_pglocale_pgservice results in a segfault rather than a useful error
message, because the infrastructure needed to use ereport hasn't been
initialized; specifically, MemoryContextInit hasn't been called.
One known cause of this is starting the server in a directory it
doesn't have permission to read.
We could try to prevent set_pglocale_pgservice from using anything that
depends on palloc or elog, but that would be messy, and the odds of future
breakage seem high. Moreover there are other things being called in main.c
that look likely to use palloc or elog too --- perhaps those things
shouldn't be there, but they are there today. The best solution seems to
be to move the call of MemoryContextInit to very early in the backend's
real main() function. I've verified that an elog or ereport occurring
immediately after that is now capable of sending something useful to
stderr.
I also added code to elog.c to print something intelligible rather than
just crashing if MemoryContextInit hasn't created the ErrorContext.
This could happen if MemoryContextInit itself fails (due to malloc
failure), and provides some future-proofing against someone trying to
sneak in new code even earlier in server startup.
Back-patch to all supported branches. Since we've only heard reports of
this type of failure recently, it may be that some recent change has made
it more likely to see a crash of this kind; but it sure looks like it's
broken all the way back.

---

Buildfarm member dunlin has been crashing since commit 8b49a60, but other
machines seem fine with that code. It turns out that removing the local
variables in ordered_set_startup() that are copies of fields in "qstate"
dodges the problem. This might cost a few cycles on register-rich
machines, but it's probably a wash on others, and in any case this code
isn't performance-critical. Thanks to Jeremy Drake for off-list
investigation.

---

In the transition functions, we don't really need to recheck this after the
first call. I had been feeling paranoid about possibly getting a non-null
argument value in some other context; but it's probably game over anyway
if we have a non-null "internal" value that's not what we are expecting.
In the final functions, the general convention in pre-existing final
functions seems to be that an Assert() is good enough, so do it like that
here too.
This seems to save a few tenths of a percent of overall query runtime,
which isn't much, but still it's just overhead if there's not a plausible
case where the checks would fire.

---

Update all files in head, and files COPYRIGHT and legal.sgml in all back
branches.

---

MSVC doesn't know that elog(ERROR) doesn't return, and gives a warning about
missing return. Silence that.
Amit Kapila

---

The initial commit of ordered-set aggregates just did all the setup work
afresh each time the aggregate function is started up. But in a GROUP BY
query, the catalog lookups need not be repeated for each group, since the
column datatypes and sort information won't change. When there are many
small groups, this makes for a useful, though not huge, performance
improvement. Per suggestion from Andrew Gierth.
Profiling of these cases suggests that it might be profitable to avoid
duplicate lookups within tuplesort startup as well; but changing the
tuplesort APIs would have much broader impact, so I left that for
another day.

---

The result is an int less than, equal to, or greater than zero, in the
style of memcmp (and, in fact, exactly the output of memcmp in some cases).
This comment previously said -1, 1, or 0, which was an overspecification,
as noted by Emre Hasegeli. All of the existing callers appear to be fine
with the actual behavior, so just fix the comment.
In passing, improve infelicitous formatting of some call sites.

---

Michael Paquier

---

From: Etsuro Fujita <fujita.etsuro@lab.ntt.co.jp>

---

Instead of looking for characters that aren't valid in JSON numbers, we
simply pass the output string through the JSON number parser, and if it
fails the string is quoted. This means among other things that money and
domains over money will be quoted correctly and generate valid JSON.
Fixes bug #8676 reported by Anderson Cristian da Silva.
Backpatched to 9.2 where JSON generation was introduced.
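For example (a sketch; the exact money output depends on lc_monetary):

    -- The money value cannot be a JSON number, so it is now quoted,
    -- keeping the output valid JSON, e.g. {"price":"$12.34"}.
    SELECT row_to_json(t) FROM (SELECT 12.34::money AS price) t;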

---

The bug would only show up if the C sockaddr structure contained
zero in the first byte for a valid address; otherwise it would
fail to fail, which is probably why it went unnoticed for so long.
Patch submitted by Joel Jacobson after seeing an article by Andrey
Karpov in which he reports finding this through static code
analysis using PVS-Studio. While I was at it I moved a definition
of a local variable referenced in the buggy code to a more local
context.
Backpatch to all supported branches.

---

Per report from Jeff Davis.

---

Most other range operations seem to work all right on domains,
but this one not so much, at least not since commit 918eee0c.
Per bug #8684 from Brett Neumeier.

---

This patch introduces generic support for ordered-set and hypothetical-set
aggregate functions, as well as implementations of the instances defined in
SQL:2008 (percentile_cont(), percentile_disc(), rank(), dense_rank(),
percent_rank(), cume_dist()). We also added mode(), though it is not in the
spec, as well as versions of percentile_cont() and percentile_disc() that
can compute multiple percentile values in one pass over the data.
Unlike the original submission, this patch puts full control of the sorting
process in the hands of the aggregate's support functions. To allow the
support functions to find out how they're supposed to sort, a new API
function AggGetAggref() is added to nodeAgg.c. This allows retrieval of
the aggregate call's Aggref node, which may have other uses beyond the
immediate need. There is also support for ordered-set aggregates to
install cleanup callback functions, so that they can be sure that
infrastructure such as tuplesort objects gets cleaned up.
In passing, make some fixes in the recently-added support for variadic
aggregates, and make some editorial adjustments in the recent FILTER
additions for aggregates. Also, simplify use of IsBinaryCoercible() by
allowing it to succeed whenever the target type is ANY or ANYELEMENT.
It was inconsistent that it dealt with other polymorphic target types
but not these.
Atri Sharma and Andrew Gierth; reviewed by Pavel Stehule and Vik Fearing,
and rather heavily editorialized upon by Tom Lane
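Some illustrative calls to the new aggregates:

    -- Median and two percentiles computed in one pass over the data.
    SELECT percentile_cont(0.5) WITHIN GROUP (ORDER BY x) AS median,
           percentile_disc(ARRAY[0.25, 0.75]) WITHIN GROUP (ORDER BY x)
    FROM generate_series(1, 100) AS t(x);

    -- Hypothetical-set aggregate: the rank 42 would have among the values.
    SELECT rank(42) WITHIN GROUP (ORDER BY x)
    FROM generate_series(1, 100) AS t(x);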

---

Instead of changing the tuple xmin to FrozenTransactionId, the combination
of HEAP_XMIN_COMMITTED and HEAP_XMIN_INVALID, which were previously never
set together, is now defined as HEAP_XMIN_FROZEN. A variety of previous
proposals to freeze tuples opportunistically before vacuum_freeze_min_age
is reached have foundered on the objection that replacing xmin by
FrozenTransactionId might hinder debugging efforts when things in this
area go awry; this patch is intended to solve that problem by keeping
the XID around (but largely ignoring the value to which it is set).
Third-party code that checks for HEAP_XMIN_INVALID on tuples where
HEAP_XMIN_COMMITTED might be set will be broken by this change. To fix,
use the new accessor macros in htup_details.h rather than consulting the
bits directly. HeapTupleHeaderGetXmin has been modified to return
FrozenTransactionId when the infomask bits indicate that the tuple is
frozen; use HeapTupleHeaderGetRawXmin when you already know that the
tuple isn't marked committed or frozen, or want the raw value anyway.
We currently do this in routines that display the xmin for user consumption,
in tqual.c where it's known to be safe and important for the avoidance of
extra cycles, and in the function-caching code for various procedural
languages, which shouldn't invalidate the cache just because the tuple
gets frozen.
Robert Haas and Andres Freund
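An observable consequence at the SQL level (the table name is a
placeholder):

    -- Display routines report FrozenTransactionId (2) as xmin for
    -- frozen tuples, even though the raw XID is preserved on disk.
    VACUUM FREEZE my_table;
    SELECT xmin, ctid FROM my_table;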

---

Sawada Masahiko

---

Just as backends must clean up their shared memory state (releasing
lwlocks, buffer pins, etc.) before exiting, they must also perform
any similar cleanups related to dynamic shared memory segments they
have mapped before unmapping those segments. So add a mechanism to
ensure that.
Existing on_shmem_exit hooks include both "user level" cleanup, such
as transaction abort and removal of leftover temporary relations, and
"low level" cleanup that forcibly releases leftover shared memory
resources. On-detach callbacks should run after the first
group but before the second group, so create a new before_shmem_exit
function for registering the early callbacks and keep on_shmem_exit
for the regular callbacks. (An earlier draft of this patch added an
additional argument to on_shmem_exit, but that had a much larger
footprint and probably a substantially higher risk of breaking third
party code for no real gain.)
Patch by me, reviewed by KaiGai Kohei and Andres Freund.

---

If a tuple was locked by transaction A, and transaction B updated it,
the new version of the tuple created by B would be locked by A, yet
visible only to B; due to an oversight in HeapTupleSatisfiesUpdate, the
lock held by A wouldn't get checked if transaction B later deleted (or
key-updated) the new version of the tuple. This might cause referential
integrity checks to give false positives (that is, allow deletes that
should have been rejected).
This is an easy oversight to have made, because prior to improved tuple
locks in commit 0ac5ad5134f it wasn't possible to have tuples created by
our own transaction that were also locked by remote transactions, and so
locks weren't even considered in that code path.
It is recommended that foreign keys be rechecked manually in bulk after
installing this update, in case some referenced rows are missing with
some referencing row remaining.
Per bug reported by Daniel Wood in
CAPweHKe5QQ1747X2c0tA=5zf4YnS2xcvGf13Opd-1Mq24rF1cQ@mail.gmail.com