| Commit message | Author | Age |
...
| |
Previously, set_default_effective_cache_size() could not handle the fork,
non-fork, and bootstrap cases.
| |
The TYPEALIGN macro, and the related ones like MAXALIGN, don't work with
values larger than what fits in an intptr_t, because TYPEALIGN casts the
argument to intptr_t to do the arithmetic. That's not a problem when dealing with
pointers or lengths or offsets related to pointers, but the XLogInsert
scaling patch added a call to MAXALIGN with an XLogRecPtr argument.
To fix, add wider variants of the macros, called TYPEALIGN64 and MAXALIGN64,
which are just like the existing variants but work with uint64 instead of
intptr_t.
Report and patch by David Rowley, analysis by Andres Freund.
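A sketch of the pattern, mirroring the existing macros but doing the arithmetic in uint64 (shown for illustration; the exact definitions live in the tree):

    /* Sketch: round LEN up to the next multiple of ALIGNVAL using 64-bit
     * arithmetic; ALIGNVAL must be a power of 2 for the mask trick to work. */
    #define TYPEALIGN64(ALIGNVAL, LEN) \
        (((uint64) (LEN) + ((ALIGNVAL) - 1)) & ~((uint64) ((ALIGNVAL) - 1)))

    /* 64-bit counterpart of MAXALIGN, safe to use on an XLogRecPtr */
    #define MAXALIGN64(LEN) TYPEALIGN64(MAXIMUM_ALIGNOF, (LEN))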
| |
1. In heap_hot_search_buffer(), the PredicateLockTuple() call is passed the
wrong offset number. heapTuple->t_self is set to the tid of the first
tuple in the chain that's visited, not the one actually being read.
2. CheckForSerializableConflictIn() uses the tuple's t_ctid field
instead of t_self to check for existing predicate locks on the tuple. If
the tuple was updated, but the updater rolled back, t_ctid points to the
aborted dead tuple.
Reported by Hannu Krosing. Backpatch to 9.1.
| |
Otherwise, we access already-freed memory. Oops.
Report by Michael Paquier. Fix by me.
| |
If a tuple was frozen while its predicate locks mattered,
read-write dependencies could be missed, resulting in failure to
detect conflicts which could lead to anomalies in committed
serializable transactions.
This field was added to the tag when we still thought that it was
necessary to carry locks forward to a new version of an updated
row. That was later proven to be unnecessary, which allowed
simplification of the code, but elimination of xmin from the tag
was missed at the time.
Per report and analysis by Heikki Linnakangas.
Backpatch to 9.1.
| |
During immediate shutdown or crash recovery, clamp the sleep time to a
minimum of zero rather than a maximum of one second. The previous code
could produce a negative sleep time, leading to failure in select() calls.
Also, on crash recovery, reset AbortStartTime as soon as SIGKILL is sent
or abort processing has commenced, instead of waiting until the startup
process completes; resetting it when SIGKILL is sent also avoids sending
that signal repeatedly.
Per trouble report from Jeff Janes on
CAMkU=1xd3=wFqZwwuXPWe4BQs3h1seYo8LV9JtSjW5RodoPxMg@mail.gmail.com
Author: MauMau
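As a rough illustration of the first fix, the remaining wait has to be clamped at zero before it reaches select(); a minimal sketch with a hypothetical helper name:

    /* Hypothetical helper: milliseconds left until 'deadline', clamped at
     * zero so the subsequent select() never sees a negative timeout. */
    static long
    timeout_ms_until(pg_time_t deadline, pg_time_t now)
    {
        long    ms = (long) (deadline - now) * 1000L;

        return (ms < 0) ? 0 : ms;
    }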
| |
Issue an error for SET LOCAL/CONSTRAINTS/TRANSACTION outside a transaction
block, as these commands have no effect there.
Per suggestion from Morten Hustveit.
| |
Report from Kevin Hale Boyes.
| |
DISCARD ALL will now discard cached sequence information, as well.
Fabrízio de Royes Mello, reviewed by Zoltán Böszörményi, with some
further tweaks by me.
| |
It makes for cleaner code to have separate Get/Add functions for PostingItems
and ItemPointers. A few callsites that have to deal with both types need to
be duplicated because of this, but all the callers have to know which one
they're dealing with anyway. Overall, this reduces the amount of casting
required.
Extracted from Alexander Korotkov's larger patch to change the data page
format.
| |
This is in support of a future REINDEX CONCURRENTLY feature.
Michael Paquier
| |
Previously, bms_add_member() would palloc a whole new copy of the existing
set, copy the words, and pfree the old one. repalloc() is potentially much
faster, and more importantly, this is less surprising if CurrentMemoryContext
is not the same as the context the old set is in. bms_add_member() still
allocates a new bitmapset in CurrentMemoryContext if NULL is passed as
argument, but that is a lot less likely to induce bugs.
Nicholas White.
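A sketch of the repalloc-based approach, simplified from the bitmapset code (WORDNUM, BITNUM, and BITMAPSET_SIZE are the existing helpers):

    Bitmapset *
    bms_add_member(Bitmapset *a, int x)
    {
        int     wordnum = WORDNUM(x);
        int     bitnum = BITNUM(x);

        if (a == NULL)
            return bms_make_singleton(x);

        /* grow the existing allocation in place if the set is too small */
        if (wordnum >= a->nwords)
        {
            int     oldnwords = a->nwords;
            int     i;

            a = (Bitmapset *) repalloc(a, BITMAPSET_SIZE(wordnum + 1));
            a->nwords = wordnum + 1;
            /* zero out the newly added words */
            for (i = oldnwords; i < a->nwords; i++)
                a->words[i] = 0;
        }

        a->words[wordnum] |= ((bitmapword) 1 << bitnum);
        return a;
    }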
| |
lo_open registers the currently active snapshot, and checks if the
large object exists after that. Normally, snapshots registered by lo_open
are unregistered at end of transaction when the lo descriptor is closed, but
if we error out before the lo descriptor is added to the list of open
descriptors, it is leaked. Fix by moving the snapshot registration to after
checking if the large object exists.
Reported by Pavel Stehule. Backpatch to 8.4. The snapshot registration
system was introduced in 8.4, so prior versions are not affected (and not
supported, anyway).
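Schematically, the fix is just a matter of ordering (function and variable names approximate):

    /* check existence first ... */
    if (!LargeObjectExists(lobjId))
        ereport(ERROR,
                (errcode(ERRCODE_UNDEFINED_OBJECT),
                 errmsg("large object %u does not exist", lobjId)));

    /* ... and only register the snapshot once we know we will return a
     * descriptor, so an error above cannot leak a registered snapshot */
    snapshot = RegisterSnapshotOnOwner(GetActiveSnapshot(),
                                       TopTransactionResourceOwner);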
| |
David Rowley, after a suggestion from Heikki Linnakangas. Reviewed by
Albe Laurenz, and further edited by me.
| |
There is a rare race condition, when a transaction that inserted a tuple
aborts while vacuum is processing the page containing the inserted tuple.
Vacuum prunes the page first, which normally removes any dead tuples, but
if the inserting transaction aborts right after that, the loop after
pruning will see a dead tuple and remove it instead. That's OK, but if the
page is on a table with no indexes, and the page becomes completely empty
after removing the dead tuple (or tuples) on it, it will be immediately
marked as all-visible. That's OK too, but the sanity check in vacuum would
throw a warning because it thinks that the page contains dead tuples and
was nevertheless marked as all-visible, even though it just vacuumed away
the dead tuples and so it doesn't actually contain any.
Spotted this while reading the code. It's difficult to hit the race
condition otherwise, but can be done by putting a breakpoint after the
heap_page_prune() call.
Backpatch all the way to 8.4, where this code first appeared.
| |
B-tree operators are not allowed to leak memory into the current memory
context. Range_cmp leaked detoasted copies of the arguments. That caused
a quick out-of-memory error when creating an index on a range column.
Reported by Marian Krucina, bug #8468.
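The fix amounts to following the usual idiom for btree support functions: free any detoasted copies before returning. A sketch, not the exact patch (comparison logic elided):

    Datum
    range_cmp(PG_FUNCTION_ARGS)
    {
        RangeType  *r1 = PG_GETARG_RANGE(0);
        RangeType  *r2 = PG_GETARG_RANGE(1);
        int         cmp = 0;

        /* ... actual comparison of r1 and r2 elided ... */

        /* free detoasted copies so nothing leaks into the caller's context */
        PG_FREE_IF_COPY(r1, 0);
        PG_FREE_IF_COPY(r2, 1);

        PG_RETURN_INT32(cmp);
    }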
| |
Previously, arbitrary system columns could be mentioned in table
constraints, but they were not correctly checked at runtime, because
the values weren't actually set correctly in the tuple. Since it
seems easy enough to initialize the table OID properly, do that,
and continue allowing that column, but disallow the rest unless and
until someone figures out a way to make them work properly.
No back-patch, because this doesn't seem important enough to take the
risk of destabilizing the back branches. In fact, this will pose a
dump-and-reload hazard for those upgrading from previous versions:
constraints that were accepted before but were not correctly enforced
will now either be enforced correctly or not accepted at all. Either
could result in restore failures, but in practice I think very few
users will notice the difference, since the use case is pretty
marginal anyway and few users will be relying on features that have
not historically worked.
Amit Kapila, reviewed by Rushabh Lathia, with doc changes by me.
| |
Per complaint from Andrew Gierth.
| |
Etsuro Fujita
| |
This has been unused since commit 8563ccae2caf.
Noted by Antonin Houska
| |
It seems to make more sense to use "cutoff multixact" terminology
throughout the backend code; "freeze" is associated with replacing an
Xid with FrozenTransactionId, which is not what we do for MultiXactIds.
Andres Freund
Some adjustments by Álvaro Herrera
| |
Bernd Helmle
| |
Once the administrator has called for an immediate shutdown or a backend
crash has triggered a reinitialization, no mere SIGINT or SIGTERM should
change that course. Such derailment remains possible when the signal
arrives before quickdie() blocks signals. That being a narrow race
affecting most PostgreSQL signal handlers in some way, leave it for
another patch. Back-patch this to all supported versions.
| |
Before, it would only show schemas that the current user owns. Per
discussion, the new behavior is more useful and consistent for PostgreSQL.
| |
This allows a 32-bit field to represent an *optional* command ID
without a separate flag bit.
Andres Freund
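One way to picture this: reserve one value of the 32-bit CommandId space to mean "no command ID", instead of carrying a separate flag bit. Sketch (the sentinel value and helper are illustrative):

    typedef uint32 CommandId;

    /* reserve the all-ones value for "no command ID present" (illustrative) */
    #define InvalidCommandId    ((CommandId) 0xFFFFFFFF)

    /* hypothetical helper: a bare 32-bit field now encodes presence by itself */
    static inline bool
    command_id_is_valid(CommandId cid)
    {
        return cid != InvalidCommandId;
    }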
| |
Doing so was helpful for some Valgrind usage and distracting for other
usage. One can achieve the same effect by changing log_statement and
pointing both PostgreSQL and Valgrind logging to stderr.
Per gripe from Andres Freund.
| |
Commit 95ef6a344821655ce4d0a74999ac49dd6af6d342 removed the
ability to create rules on an individual column as of 7.3, but
left some residual code which has since been useless. This cleans
up that dead code without any change in behavior other than
dropping the useless column from the catalog.
| |
If the hash table backing a catalog cache becomes too full (fillfactor > 2),
enlarge it. A new buckets array, double the size of the old, is allocated,
and all entries in the old hash are moved to the right bucket in the new
hash.
This has two benefits. First, cache lookups don't get so expensive when
there are lots of entries in a cache, like if you access hundreds of
thousands of tables. Second, we can make the (initial) sizes of the caches
much smaller, which saves memory.
This patch dials down the initial sizes of the catcaches. The new sizes are
chosen so that a backend that only runs a few basic queries still won't need
to enlarge any of them.
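A sketch of the enlargement step, with details simplified (the real code allocates the new array in CacheMemoryContext and indexes buckets with a power-of-two mask rather than a modulo):

    static void
    RehashCatCache(CatCache *cp)
    {
        dlist_head *newbucket;
        int         newnbuckets = cp->cc_nbuckets * 2;
        int         i;

        newbucket = (dlist_head *) palloc0(newnbuckets * sizeof(dlist_head));

        /* move every entry from its old bucket to the right new bucket */
        for (i = 0; i < cp->cc_nbuckets; i++)
        {
            dlist_mutable_iter iter;

            dlist_foreach_modify(iter, &cp->cc_bucket[i])
            {
                CatCTup    *ct = dlist_container(CatCTup, cache_elem, iter.cur);
                int         newindex = ct->hash_value % newnbuckets;

                dlist_delete(iter.cur);
                dlist_push_head(&newbucket[newindex], &ct->cache_elem);
            }
        }

        pfree(cp->cc_bucket);
        cp->cc_nbuckets = newnbuckets;
        cp->cc_bucket = newbucket;
    }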
| |
This reverts commit 269e780822abb2e44189afaccd6b0ee7aefa7ddd
and commit 5b571bb8c8d2bea610e01ae1ee7bc05adcfff528.
Unfortunately, the initial patch had insufficient performance testing,
and resulted in a regression.
Per report by Thom Brown.
| |
Previous text was "No description available".
Tianyin Xu
| |
Performance testing shows that if the insertpos_lck spinlock and the fields
that it protects are on the same cache line with other variables that are
frequently accessed, the false sharing can hurt performance a lot. Keep
them apart by adding some padding.
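Schematically, the hot fields get a cache line to themselves (the struct layout and padding size here are illustrative, not the exact ones):

    #define CACHE_LINE_SIZE 64          /* illustrative */

    typedef struct
    {
        slock_t     insertpos_lck;      /* protects the two fields below */
        uint64      CurrBytePos;        /* next insertion position */
        uint64      PrevBytePos;        /* start of the previous record */

        /* pad to a full cache line to avoid false sharing with neighbors */
        char        pad[CACHE_LINE_SIZE];
    } XLogInsertPos;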
| |
Andres Freund
| |
This GUC context value was once only used by ALTER DATABASE SET and
ALTER USER SET. That's not true anymore, though, so rewrite the
comments to be a bit more general.
Patch in HEAD only, since this is just an internal documentation issue.
| |
The previous coding attempted to activate all the GUC settings specified
in SET clauses, so that the function validator could operate in the GUC
environment expected by the function body. However, this is problematic
when restoring a dump, since the SET clauses might refer to database
objects that don't exist yet. We already have the parameter
check_function_bodies that's meant to prevent forward references in
function definitions from breaking dumps, so let's change CREATE FUNCTION
to not install the SET values if check_function_bodies is off.
Authors of function validators were already advised not to make any
"context sensitive" checks when check_function_bodies is off, if indeed
they're checking anything at all in that mode. But extend the
documentation to point out the GUC issue in particular.
(Note that we still check the SET clauses to some extent; the behavior
with !check_function_bodies is now approximately equivalent to what ALTER
DATABASE/ROLE have been doing for a while with context-dependent GUCs.)
This problem can be demonstrated in all active branches, so back-patch
all the way.
| |
There's no inherent reason why an aggregate function can't be variadic
(even VARIADIC ANY) if its transition function can handle the case.
Indeed, this patch to add the feature touches none of the planner or
executor, and little of the parser; the main missing stuff was DDL and
pg_dump support.
It is true that variadic aggregates can create the same sort of ambiguity
about parameters versus ORDER BY keys that was complained of when we
(briefly) had both one- and two-argument forms of string_agg(). However,
the policy formed in response to that discussion only said that we'd not
create any built-in aggregates with varying numbers of arguments, not that
we shouldn't allow users to do it. So the logical extension of that is
we can allow users to make variadic aggregates as long as we're wary about
shipping any such in core.
In passing, this patch allows aggregate function arguments to be named, to
the extent of remembering the names in pg_proc and dumping them in pg_dump.
You can't yet call an aggregate using named-parameter notation. That seems
like a likely future extension, but it'll take some work, and it's not what
this patch is really about. Likewise, there's still some work needed to
make window functions handle VARIADIC fully, but I left that for another
day.
initdb forced because of new aggvariadic field in Aggref parse nodes.
| |
Also line-wrap an over-wide line in a comment that's ignored by pgindent.
| |
Failing to do so can cause queries to return wrong data, error out or crash.
This requires adding a new binaryheap_reset() method to binaryheap.c,
but that probably should have been there anyway.
Per bug #8410 from Terje Elde. Diagnosis and patch by Andres Freund.
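binaryheap_reset() just empties the heap without freeing it, so the caller can refill it with binaryheap_add_unordered() and binaryheap_build(). A sketch:

    void
    binaryheap_reset(binaryheap *heap)
    {
        heap->bh_size = 0;                  /* discard all nodes */
        heap->bh_has_heap_property = true;  /* an empty heap is trivially ordered */
    }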
| |
Using the infrastructure provided by this patch, it's possible either
to wait for the startup of a dynamically-registered background worker,
or to poll the status of such a worker without waiting. In either
case, the current PID of the worker process can also be obtained.
As usual, worker_spi is updated to demonstrate the new functionality.
Patch by me. Review by Andres Freund.
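A sketch of the intended usage pattern (worker setup elided; the handle comes from RegisterDynamicBackgroundWorker()):

    BackgroundWorker        worker;
    BackgroundWorkerHandle *handle;
    BgwHandleStatus         status;
    pid_t                   pid;

    /* ... fill in worker.bgw_* fields as usual ... */

    if (!RegisterDynamicBackgroundWorker(&worker, &handle))
        ereport(ERROR,
                (errmsg("could not register background worker")));

    /* block until the postmaster has started (or failed to start) the worker */
    status = WaitForBackgroundWorkerStartup(handle, &pid);
    if (status == BGWH_STARTED)
        elog(LOG, "background worker running as PID %d", (int) pid);
    else
        ereport(ERROR,
                (errmsg("background worker failed to start")));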
| |
As noted by Tom Lane, commit 813fb0315587d32e3b77af1051a0ef517d187763
was overly optimistic about how safe it is to concurrently change
enumsortorder values under MVCC catalog scan semantics. Restore
some of the previous text, with hopefully-correct adjustments for
the new state of play.
| |
Andres Freund; bug detected by valgrind
| |
The previous coding in plancache.c essentially used 10% of the estimated
runtime as its cost estimate for planning. This can be pretty bogus,
especially when the estimated runtime is very small, such as in a simple
expression plan created by plpgsql, or a simple INSERT ... VALUES.
While we don't have a really good handle on how planning time compares
to runtime, it seems reasonable to use an estimate based on the number of
relations referenced in the query, with a rather large multiplier. This
patch uses 1000 * cpu_operator_cost * (nrelations + 1), so that even a
trivial query will be charged 1000 * cpu_operator_cost for planning.
This should address the problem reported by Marc Cousin and others that
9.2 and up prefer custom plans in cases where the planning time greatly
exceeds what can be saved.
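Schematically, the charge is a simple function of the range table length (the helper name is hypothetical; cpu_operator_cost is the existing GUC variable):

    /* Hypothetical helper showing the new planning-cost estimate */
    static double
    planning_cost_estimate(int nrelations)
    {
        /* even a trivial query is charged 1000 * cpu_operator_cost */
        return 1000.0 * cpu_operator_cost * (nrelations + 1);
    }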
|