DST law changes in Brazil, Cuba, Fiji, Palestine, Russia, Samoa.
Historical corrections for Alaska and British East Africa.
In the previous coding, callers were faced with an awkward choice:
look up the name, do permissions checks, and then lock the table; or
look up the name, lock the table, and then do permissions checks.
The first choice was wrong because the results of the name lookup
and permissions checks might be out-of-date by the time the table
lock was acquired, while the second allowed a user with no privileges
to interfere with access to a table by users who do have privileges
(e.g. if a malicious backend queues up for an AccessExclusiveLock on
a table on which AccessShareLock is already held, further attempts
to access the table will be blocked until the AccessExclusiveLock
is obtained and the malicious backend's transaction rolls back).
To fix, allow callers of RangeVarGetRelid() to pass a callback which
gets executed after performing the name lookup but before acquiring
the relation lock. If the name lookup is retried (because
invalidation messages are received), the callback will be re-executed
as well, so we get the best of both worlds. RangeVarGetRelid() is
renamed to RangeVarGetRelidExtended(); callers not wishing to supply
a callback can continue to invoke it as RangeVarGetRelid(), which is
now a macro. Since the only caller that uses nowait = true now passes a
callback anyway, the RangeVarGetRelid() macro defaults nowait to false as
well. The callback can also be used for supplemental locking - for
example, REINDEX INDEX needs to acquire the table lock before the index
lock to reduce deadlock possibilities.
There's a lot more work to be done here to fix all the cases where this
can be a problem, but this commit provides the general infrastructure
and fixes the following specific cases: REINDEX INDEX, REINDEX TABLE,
LOCK TABLE, and DROP TABLE/INDEX/SEQUENCE/VIEW/FOREIGN TABLE.
Per discussion with Noah Misch and Alvaro Herrera.
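
As a concrete illustration, a caller might use the callback roughly as in the
sketch below. The permissions-check routines (pg_class_ownercheck, GetUserId,
aclcheck_error) are real backend functions of that era, but the exact
RangeVarGetRelidExtended() and callback signatures are assumptions based on
this message, the header list is approximate, and my_lock_precheck /
lock_relation_checked are hypothetical names, not code from the commit. The
point is only that the check runs after the name lookup, before the lock, and
is repeated if the lookup has to be retried.

    #include "postgres.h"
    #include "catalog/namespace.h"   /* RangeVarGetRelidExtended() */
    #include "miscadmin.h"           /* GetUserId() */
    #include "storage/lock.h"        /* AccessExclusiveLock */
    #include "utils/acl.h"           /* pg_class_ownercheck(), aclcheck_error() */

    /* Hypothetical callback: refuse before queueing up for the lock. */
    static void
    my_lock_precheck(const RangeVar *relation, Oid relId, Oid oldRelId,
                     void *callback_arg)
    {
        if (OidIsValid(relId) && !pg_class_ownercheck(relId, GetUserId()))
            aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_CLASS,
                           relation->relname);
    }

    /* Hypothetical caller: look up the name, run the check, then lock. */
    static Oid
    lock_relation_checked(const RangeVar *relation)
    {
        return RangeVarGetRelidExtended(relation, AccessExclusiveLock,
                                        false,   /* missing_ok */
                                        false,   /* nowait */
                                        my_lock_precheck, NULL);
    }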
On a platform that isn't supplying __FILE__, previous coding would either
crash or give a stale result for the filename string. Not sure how likely
that is, but the original code catered for it, so let's keep doing so.
In vpath builds, the __FILE__ macro that is used in verbose error
reports contains the full absolute file name, which makes the error
messages excessively verbose. So keep only the base name, thus
matching the behavior of non-vpath builds.
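
For illustration, keeping only the base name can also be done at run time with
plain C, as in the generic sketch below; this is not the committed build-system
change, which may strip the directory differently, and report_error is a
hypothetical reporting function.

    #include <string.h>

    /* Return only the final path component of a __FILE__-style string. */
    static const char *
    base_file_name(const char *path)
    {
        const char *slash = strrchr(path, '/');

        return slash ? slash + 1 : path;
    }

    /* e.g. report_error(base_file_name(__FILE__), __LINE__, ...) */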
Per buildfarm.
Force the transaction isolation level to READ COMMITTED in autovacuum
worker and launcher processes. There is no benefit to using a higher
isolation level, and doing so could result in delaying foreground
transactions (or maybe even causing unnecessary serialization failures?).
Noted by Dan Ports.
Also, make sure we disable zero_damaged_pages and statement_timeout in
the autovac launcher, not only workers. Now that the launcher can run
transactions, these settings could affect its behavior, and it seems
like the same arguments apply to the launcher as the workers.
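
Per-process overrides of this kind look roughly like the sketch below.
SetConfigOption() is a real backend routine, but treat the exact context and
source constants, and the function name force_autovacuum_gucs, as assumptions
made for illustration rather than the committed code.

    #include "postgres.h"
    #include "utils/guc.h"

    static void
    force_autovacuum_gucs(void)
    {
        /* a stricter isolation level buys nothing in autovacuum */
        SetConfigOption("default_transaction_isolation", "read committed",
                        PGC_SUSET, PGC_S_OVERRIDE);

        /* never zero damaged pages or time out while vacuuming */
        SetConfigOption("zero_damaged_pages", "false",
                        PGC_SUSET, PGC_S_OVERRIDE);
        SetConfigOption("statement_timeout", "0",
                        PGC_SUSET, PGC_S_OVERRIDE);
    }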
Fix entirely broken handling of va_list printing routines, update some
out-of-date comments, fix some bogus inclusion orders, fix NLS declarations,
fix missed realloc calls.
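
A classic pitfall with va_list printing routines is reusing a va_list that a
vsnprintf() call has already consumed; the generic sketch below shows the
va_copy() pattern that avoids it. This only illustrates the class of bug being
described, not the code touched by the commit.

    #include <stdarg.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Format into a freshly allocated buffer, measuring first. */
    static char *
    format_alloc(const char *fmt, ...)
    {
        va_list args;
        va_list args_copy;
        int     needed;
        char   *buf;

        va_start(args, fmt);
        va_copy(args_copy, args);                 /* keep a copy for pass 2 */
        needed = vsnprintf(NULL, 0, fmt, args);   /* pass 1: measure */
        va_end(args);

        buf = malloc(needed + 1);
        if (buf != NULL)
            vsnprintf(buf, needed + 1, fmt, args_copy);  /* pass 2: format */
        va_end(args_copy);

        return buf;
    }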
Simple extension of previous patch for CHECK constraints.
pg_dumpall to use the same memory allocation functions as the others.
This should make it easier to identify which row is problematic when an
insert or update is processing many rows.
The formatting is similar to that for unique-index violation messages,
except that we limit field widths to 64 bytes since otherwise the message
could get unreasonably long. (In particular, there's currently no attempt
to quote or escape field values that contain commas etc.)
Jan Kundrát, reviewed by Royce Ausburn, somewhat rewritten by me.
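
A generic sketch of the truncation being described: limit each printed field
value to a fixed width and mark the cut. The helper name and exact formatting
are illustrative, not the server's actual message-building code.

    #include <stdio.h>
    #include <string.h>

    #define MAX_FIELD_WIDTH 64

    /* Copy value into dst, cutting it off at MAX_FIELD_WIDTH bytes. */
    static void
    append_truncated_field(char *dst, size_t dstsize, const char *value)
    {
        if (strlen(value) <= MAX_FIELD_WIDTH)
            snprintf(dst, dstsize, "%s", value);
        else
            snprintf(dst, dstsize, "%.*s...", MAX_FIELD_WIDTH, value);
    }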
The old expression sed 's,$(srcdir),python3,' would normally resolve
as sed 's,.,python3,', which is not really what we wanted. While it
doesn't actually break anything right now, it's still wrong, so put in
a bit more work to make it more robust.
Moving the code two full tab stops to the right requires rethinking of
cosmetic code layout choices, which pgindent isn't really able to do for
us. Whitespace and comment adjustments only, no code changes.
While the deletion in itself wouldn't break things, any further creation
of objects in the script would result in dangling pg_depend entries being
added by recordDependencyOnCurrentExtension(). An example from Phil
Sorber convinced me that this is just barely likely enough to be worth
expending a couple lines of code to defend against. The resulting error
message might be confusing, but it's better than leaving corrupted catalog
contents for the user to deal with.
This function has now grown enough cases that a switch seems appropriate.
This results in a measurable speed improvement on some platforms, and
should certainly not hurt. The code's in need of a pgindent run now,
though.
Andres Freund
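
A generic before/after illustration of this kind of refactoring (not the
actual function touched here): dispatching on a tag with a switch lets the
compiler emit a jump table instead of a chain of comparisons.

    typedef enum { KIND_INT, KIND_FLOAT, KIND_TEXT, KIND_BOOL } ValueKind;

    /* before: a chain of comparisons evaluated one by one */
    static int
    type_width_if_chain(ValueKind kind)
    {
        if (kind == KIND_INT) return 4;
        else if (kind == KIND_FLOAT) return 8;
        else if (kind == KIND_TEXT) return -1;
        else return 1;
    }

    /* after: a switch, which can become a single indexed jump */
    static int
    type_width_switch(ValueKind kind)
    {
        switch (kind)
        {
            case KIND_INT:   return 4;
            case KIND_FLOAT: return 8;
            case KIND_TEXT:  return -1;
            case KIND_BOOL:  return 1;
        }
        return 0;   /* keep compilers happy about falling off the end */
    }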
The server name for a foreign table was not quoted at need, as per report
from Ronan Dunklau. Also, queries related to FDW options were inadequately
schema-qualified in places where the search path isn't just pg_catalog, were
inconsistently formatted, and weren't always checked for returning the
expected number of rows.
This can be used to remove the overhead of SSL compression on
fast networks.
Laurenz Albe
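
A minimal libpq sketch using the new option: PQconnectdb, PQstatus,
PQerrorMessage and PQfinish are standard libpq calls, while the host and
database values are placeholders.

    #include <stdio.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        /* request SSL but turn off SSL-level compression */
        PGconn *conn = PQconnectdb("host=localhost dbname=postgres "
                                   "sslmode=require sslcompression=0");

        if (PQstatus(conn) != CONNECTION_OK)
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));

        PQfinish(conn);
        return 0;
    }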
The EvalPlanQual machinery assumes that whole-row Vars generated for the
outputs of non-table RTEs will be of composite types. However, for the
case where the RTE is a function call returning a scalar type, we were
doing the wrong thing, as a result of sharing code with a parser case
where the function's scalar output is wanted. (Or at least, that's what
that case has done historically; it does seem a bit inconsistent.)
To fix, extend makeWholeRowVar's API so that it can support both use-cases.
This fixes Belinda Cussen's report of crashes during concurrent execution
of UPDATEs involving joins to the result of UNNEST() --- in READ COMMITTED
mode, we'd run the EvalPlanQual machinery after a conflicting row update
commits, and it was expecting to get a HeapTuple not a scalar datum from
the "wholerowN" variable referencing the function RTE.
Back-patch to 9.0 where the current EvalPlanQual implementation appeared.
In 9.1 and up, this patch also fixes failure to attach the correct
collation to the Var generated for a scalar-result case. An example:
regression=# select upper(x.*) from textcat('ab', 'cd') x;
ERROR: could not determine which collation to use for upper() function
This fixes a longstanding but up to now benign bug in the way pg_dumpall
was built. The bug was exposed by recent code adjustments. The Makefile
does not use $(OBJS) to build pg_dumpall, so this fix removes the source
files for those objects from the pg_dumpall build and adds in the one source
file it consequently needs.
Use of a randomly chosen large value was never exactly graceful, and
now that there are penalty functions that are intentionally using infinity,
it doesn't seem like a good idea for null-vs-not-null to be using something
less.
In the original implementation, a range-contained-by search had to scan
the entire index because an empty range could be lurking anywhere.
Improve that by adding a flag to upper GiST entries that says whether the
represented subtree contains any empty ranges.
Also, make a simple mod to the penalty function to discourage empty ranges
from getting pushed into subtrees without any. This needs more work, and
the picksplit function should be taught about it too, but that code can be
improved without causing an on-disk compatibility break; so we'll leave it
for another day.
Since we're breaking on-disk compatibility of range values anyway, I took
the opportunity to reorganize the range flags bits; the unused
RANGE_xB_NULL bits are now adjacent, which might open the door for using
them in some other way later.
In passing, remove the GiST range opclass entry for <>, which doesn't seem
like it can really be indexed usefully.
Alexander Korotkov, with some editorializing by Tom
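
The pruning idea can be pictured with the self-contained sketch below (plain
integer ranges and hypothetical names, not the actual GiST opclass code): an
internal entry whose bounding range misses the query can still hide matches
for a contained-by search only if its subtree contains empty ranges, so the
flag decides whether the subtree may be skipped.

    #include <stdbool.h>

    typedef struct SubtreeKey
    {
        int  lo;            /* bounding range [lo, hi) of the whole subtree */
        int  hi;
        bool has_empty;     /* does the subtree contain any empty ranges? */
    } SubtreeKey;

    /* Must a contained-by-[q_lo, q_hi) search descend into this subtree? */
    static bool
    must_visit_for_contained_by(SubtreeKey key, int q_lo, int q_hi)
    {
        bool overlaps = key.lo < q_hi && q_lo < key.hi;

        /* empty ranges are contained by everything: the flag forces a visit */
        return overlaps || key.has_empty;
    }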
It runs the regression tests, runs pg_upgrade on the populated
database, and compares the before and after dumps. While not actually
a cross-version upgrade, this does detect omissions and bugs in the
involved tools from time to time. It's also possible to do a
cross-version upgrade by manually supplying parameters.
The original coding was
var->value = (Datum) state;
which is bogus, and then in commit 2f0f7b4bce13e68394543728801ef011fd82fac6
it was "corrected" to
var->value = PointerGetDatum(state);
which is a faithful translation but still wrong.
This seems purely cosmetic, though, so no need for a back-patch.
Pavel Stehule
with its original functions. The previous function migration would
cause too many difficulties in back-patching.
distro version of perl.
David Wheeler and Alex Hunsaker.
Backpatch to 9.1 where it applies cleanly. A simple workaround is available for earlier
branches, and further effort doesn't seem warranted.
In the cases where the result of the called proc is negated, we should
explicitly test both inputs for empty, to ensure we'll never return "true"
for an unsatisfiable query. In other cases we can rely on the called proc
to say the right thing.
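
A self-contained illustration of the rule (plain integer ranges, not the
committed range code): when one predicate is written as the negation of
another, empty inputs have to be rejected up front, because !predicate(empty, x)
would otherwise report true for a query no row can satisfy.

    #include <stdbool.h>

    typedef struct { int lo; int hi; } IntRange;    /* empty when lo >= hi */

    static bool range_empty(IntRange r) { return r.lo >= r.hi; }

    /* "a lies strictly to the left of b" */
    static bool range_before(IntRange a, IntRange b) { return a.hi <= b.lo; }

    /* "a does not lie strictly to the left of b", written via negation */
    static bool
    range_not_before(IntRange a, IntRange b)
    {
        if (range_empty(a) || range_empty(b))
            return false;               /* never claim a match for empties */
        return !range_before(a, b);
    }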
Amit Khandekar and Alex Hunsaker.
Backpatched to 9.1 where the problem first occurred.
initial null string.
Per report from Robert Haas in testing psql \ir.
Jeff Janes
Same bug as reported by Thom Brown for check constraints on tables: the
constraint must be dumped separately from the domain, otherwise it is
restored before the data and thus prevents potentially-violating data
from being loaded in the first place.
Per Dean Rasheed
ignoring errors and call-site error checking.
fixing pg_dump to properly preserve such indexes.
Backpatch to 9.1 and 9.0 (where the bug was introduced).
This adds some I/O stats to the logging of autovacuum (when the
operation takes long enough that log_autovacuum_min_duration causes it
to be logged), so that it is easier to tune. Notably, it adds buffer
I/O counts (hits, misses, dirtied) and read and write rates.
Authors: Greg Smith and Noah Misch
A simple thinko in ginRedoUpdateMetapage, namely failing to increment a
loop counter, led to inserting records into the last pending-list page in
the wrong order (the opposite of that intended). So far as I can tell,
this would not upset the code that eventually flushes pending items into
the main part of the GIN index. But it did break the code that searched
the pending list for matches, resulting in transient failure to find
matching entries during index lookups, as illustrated in bug #6307 from
Maksym Boguk.
Back-patch to 8.4 where the incorrect code was introduced.
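
The mechanism is easy to reproduce in a self-contained toy (not the actual GIN
WAL-replay code): if the insertion offset is never advanced, each new item
lands in front of the previous ones and the final order is reversed.

    #include <stdio.h>
    #include <string.h>

    /* Insert value at position pos, shifting later entries to the right. */
    static void
    insert_at(int *arr, int *len, int pos, int value)
    {
        memmove(&arr[pos + 1], &arr[pos], (*len - pos) * sizeof(int));
        arr[pos] = value;
        (*len)++;
    }

    int
    main(void)
    {
        int page[8];
        int len = 0;
        int off = 0;

        for (int i = 1; i <= 4; i++)
            insert_at(page, &len, off, i);   /* bug: off is never advanced */

        /* a correct loop would also do off++ and print "1 2 3 4" */
        for (int i = 0; i < len; i++)
            printf("%d ", page[i]);          /* prints "4 3 2 1": reversed */
        printf("\n");
        return 0;
    }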
This speeds up snapshot-taking and reduces ProcArrayLock contention.
Also, the PGPROC (and PGXACT) structures used by two-phase commit are
now allocated as part of the main array, rather than in a separate
array, and we keep ProcArray sorted in pointer order. These changes
are intended to minimize the number of cache lines that must be pulled
in to take a snapshot, and testing shows a substantial increase in
performance on both read and write workloads at high concurrencies.
Pavan Deolasee, Heikki Linnakangas, Robert Haas
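
A deliberately simplified picture of the layout change (the field names and
sizes below are stand-ins, not the real PGPROC/PGXACT definitions): the few
fields every snapshot must read live in their own small, contiguous array, so
scanning all backends touches far fewer cache lines than walking an array of
large per-backend structs.

    #include <stdint.h>

    #define MAX_BACKENDS 128            /* illustrative limit */

    /* Dense array entry: only what snapshot-taking needs to read. */
    typedef struct HotXactState
    {
        uint32_t xid;
        uint32_t xmin;
    } HotXactState;

    /* Large per-backend struct: everything else, read far less often. */
    typedef struct BackendState
    {
        HotXactState *hot;              /* points into hot_state[] below */
        /* ... locks, latch, wait information, etc. ... */
    } BackendState;

    static HotXactState hot_state[MAX_BACKENDS];   /* scanned for snapshots */
    static BackendState backends[MAX_BACKENDS];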
The WITH [NO] DATA option was not supported, nor the ability to specify
replacement column names; the former limitation wasn't even documented, as
per recent complaint from Naoya Anzai. Fix by moving the responsibility
for supporting these options into the executor. It actually takes less
code this way ...
catversion bump due to change in representation of IntoClause, which might
affect stored rules.
This allows possibly violating data to be imported before the constraint
is installed.
Bug reported by Thom Brown
exception handler. This was a regression in 9.1, when the capability
to catch specific SPI errors was added, so backpatch to 9.1.
Mika Eloranta, with some editing by Jan Urbański.
The original coding would not work for discrete ranges in which the
canonicalization rule is to produce symmetric boundaries (either [] or ()
style), as noted by Jeff Davis. Florian Pflug pointed out that we could
fix that by invoking the canonicalization function to see if the range
"between" the two given ranges normalizes to empty. This implementation
of Florian's idea is a tad slower than the original code, but only in the
case where there actually is a canonicalization function --- if not, it's
essentially the same logic as before.
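
The trick is easiest to see with canonical half-open integer ranges (a
self-contained toy, not the committed code): for A = [a_lo, a_hi) lying
entirely before B = [b_lo, b_hi), the gap between them is the range
[a_hi, b_lo), and A and B are adjacent exactly when that gap canonicalizes
to empty.

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct { int lo; int hi; } IntRange;   /* canonical form: [lo, hi) */

    static bool range_empty(IntRange r) { return r.lo >= r.hi; }

    /* Assumes a lies entirely before b (a.hi <= b.lo). */
    static bool
    ranges_adjacent(IntRange a, IntRange b)
    {
        IntRange gap = { a.hi, b.lo };             /* the range "between" them */

        return range_empty(gap);
    }

    int
    main(void)
    {
        IntRange a = {1, 3}, b = {3, 5}, c = {4, 6};

        printf("%d %d\n", ranges_adjacent(a, b),   /* 1: [1,3) meets [3,5) */
                          ranges_adjacent(a, c));  /* 0: a gap [3,4) remains */
        return 0;
    }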
Since range types can be created by non-superusers, we need to consider
their permissions. Ideally we'd check this when the type is used, not
when it's created, but that seems like much more trouble than it's worth.
The existing restriction that the support functions be immutable already
prevents most cases where an unauthorized call to a function might be
thought a security issue, and the fact that the user has no access to
the results of the system's calls to subtype_diff closes off the other
plausible reason for concern. So this check is basically pro-forma,
but let's make it anyway.
It's not clear that a per-datatype typanalyze function would be any more
useful than a generic typanalyze for ranges. What *is* clear is that
letting unprivileged users select typanalyze functions is a crash risk or
worse. So remove the option from CREATE TYPE AS RANGE, and instead put in
a generic typanalyze function for ranges. The generic function does
nothing as yet, but hopefully we'll improve that before 9.2 release.
Per discussion, the zero-argument forms aren't really worth the catalog
space (just write 'empty' instead). The one-argument forms have some use,
but they also have a serious problem with looking too much like functional
cast notation; to the point where in many real use-cases, the parser would
misinterpret what was wanted.
Committing this as a separate patch, with the thought that we might want
to revert part or all of it if we can think of some way around the cast
ambiguity.
Implement these tests directly instead of constructing a singleton range
and then applying range-contains. This saves a range serialize/deserialize
cycle as well as a couple of redundant bound-comparison steps, and adds
very little code on net.
Remove elem_contained_by_range from the GiST opclass: it doesn't belong
there because there is no way to use it in an index clause (where the
indexed column would have to be on the left). Its commutator is in the
opclass, and that's what counts.
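
A self-contained sketch of the direct test (plain ints, no infinite bounds,
not the datum-level committed code): compare the element against each bound
with its inclusivity flag instead of building a singleton range and calling
range-contains.

    #include <stdbool.h>

    typedef struct
    {
        int  lo;
        int  hi;
        bool lo_inc;        /* lower bound inclusive? */
        bool hi_inc;        /* upper bound inclusive? */
        bool empty;
    } Range;

    static bool
    elem_contained_by_range(int x, Range r)
    {
        if (r.empty)
            return false;
        if (x < r.lo || (x == r.lo && !r.lo_inc))
            return false;
        if (x > r.hi || (x == r.hi && !r.hi_inc))
            return false;
        return true;
    }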
In the normal course of events, this matters only if ALTER DEFAULT
PRIVILEGES has been used to revoke default INSERT permission. Whether
or not the new behavior is more or less likely to be what the user wants
when dealing only with the built-in privilege facilities is arguable,
but it's clearly better when using a loadable module such as sepgsql
that may use the hook in ExecCheckRTPerms to enforce additional
permissions checks.
KaiGai Kohei, reviewed by Albe Laurenz
Per discussion, relax the range input/construction rules so that the
only hard error is lower bound > upper bound. Cases where the lower
bound is <= upper bound, but the range nonetheless normalizes to empty,
are now permitted.
Fix core dump in range_adjacent when bounds are infinite. Marginal
cleanup of regression test cases, some more code commenting.