Previously, very long field masks for floats could access memory
beyond the existing buffer allocated to hold the result.
Reported by Andres Freund and Peter Geoghegan. Backpatch to all
supported versions.
Security: CVE-2015-0241
|
We've been trying to support \u0000 in JSON values since commit
78ed8e03c67d7333, and have introduced increasingly worse hacks to try to
make it work, such as commit 0ad1a816320a2b53. However, it fundamentally
can't work in the way envisioned, because the stored representation looks
the same as for \\u0000, which is not the same thing at all. It's also
entirely bogus to output \u0000 when de-escaped output is called for.
The right way to do this would be to store an actual 0x00 byte, and then
throw error only if asked to produce de-escaped textual output. However,
getting to that point seems likely to take considerable work and may well
never be practical in the 9.4.x series.
To preserve our options for better behavior while getting rid of the nasty
side-effects of 0ad1a816320a2b53, revert that commit in toto and instead
throw error if \u0000 is used in a context where it needs to be de-escaped.
(These are the same contexts where non-ASCII Unicode escapes throw error
if the database encoding isn't UTF8, so this behavior is by no means
without precedent.)
In passing, make both the \u0000 case and the non-ASCII Unicode case report
ERRCODE_UNTRANSLATABLE_CHARACTER / "unsupported Unicode escape sequence"
rather than claiming there's something wrong with the input syntax.
Back-patch to 9.4, where we have to do something because 0ad1a816320a2b53
broke things for many cases having nothing to do with \u0000. 9.3 also has
bogus behavior, but only for that specific escape value, so given the lack
of field complaints it seems better to leave 9.3 alone.
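
A minimal sketch of the check this describes, assuming a hypothetical
de_escaping_required flag; the real change lives in the JSON lexer's
Unicode-escape handling:

    /* While de-escaping \uXXXX: a zero code point cannot be stored in
     * a NUL-terminated text value, so refuse it up front. */
    if (codepoint == 0 && de_escaping_required)     /* hypothetical */
        ereport(ERROR,
                (errcode(ERRCODE_UNTRANSLATABLE_CHARACTER),
                 errmsg("unsupported Unicode escape sequence"),
                 errdetail("\\u0000 cannot be converted to text.")));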
|
Using the new interface MemoryContextAllocExtended, callers can
specify MCXT_ALLOC_NO_OOM if they are prepared to handle a NULL
return value.
Michael Paquier, reviewed and somewhat revised by me.
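
A sketch of the new calling convention (request_size is a placeholder):

    char *buf;

    /* ask for memory, getting NULL instead of an error on OOM */
    buf = MemoryContextAllocExtended(CurrentMemoryContext,
                                     request_size,
                                     MCXT_ALLOC_NO_OOM);
    if (buf == NULL)
    {
        /* degrade gracefully, e.g. retry with a smaller request */
    }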
|
calc_rangesel() failed outright when comparing range variables to empty
constant ranges with < or >=, as a result of missing cases in a switch.
It also produced a bogus estimate for > comparison to an empty range.
On top of that, the >= and > cases were mislabeled throughout. For
nonempty constant ranges, they managed to produce the right answers
anyway as a result of counterbalancing typos.
Also, default_range_selectivity() omitted cases for elem <@ range,
range &< range, and range &> range, so that rather dubious defaults
were applied for these operators.
In passing, rearrange the code in rangesel() so that the elem <@ range
case is handled in a less opaque fashion.
Report and patch by Emre Hasegeli, some additional work by me
|
This potentially allows us to add mcxt.c interfaces that do something
other than throw an error when memory cannot be allocated. We'll
handle adding those interfaces in a separate commit.
Michael Paquier, with minor changes by me
|
While building error messages to return to the user,
BuildIndexValueDescription, ExecBuildSlotValueDescription and
ri_ReportViolation would happily include the entire key or entire row in
the result returned to the user, even if the user didn't have access to
view all of the columns being included.
Instead, include only those columns which the user is providing or which
the user has select rights on. If the user does not have any rights
to view the table or any of the columns involved, then no detail is
provided and a NULL value is returned from BuildIndexValueDescription
and ExecBuildSlotValueDescription. Note that, for key cases, the user
must have access to all of the columns for the key to be shown; a
partial key will not be returned.
Further, in master only, do not return any data for cases where row
security is enabled on the relation and row security should be applied
for the user. This required a bit of refactoring and moving of things
around related to RLS -- note the addition of utils/misc/rls.c.
Back-patch all the way, as column-level privileges are now in all
supported versions.
This has been assigned CVE-2014-8161, but since the issue and the patch
have already been publicized on pgsql-hackers, there's no point in trying
to hide this commit.
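
Callers must now be prepared for a NULL result; a sketch of the
expected pattern at a unique-index violation (variable names are
illustrative):

    char *key_desc = BuildIndexValueDescription(indexRel, values, isnull);

    ereport(ERROR,
            (errcode(ERRCODE_UNIQUE_VIOLATION),
             errmsg("duplicate key value violates unique constraint \"%s\"",
                    RelationGetRelationName(indexRel)),
             key_desc ? errdetail("Key %s already exists.", key_desc) : 0));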
|
Commit 145343534c153d1e6c3cff1fa1855787684d9a38 arranged to store numeric
NaN values as short-header numerics, but the field access macros did not
get the memo: they thought only "SHORT" numerics have short headers.
Most of the time this makes no difference because we don't access the
weight or dscale of a NaN; but numeric_send does that. As pointed out
by Andrew Gierth, this led to fetching uninitialized bytes.
AFAICS this could not have any worse consequences than that; in particular,
an unaligned stored numeric would have been detoasted by PG_GETARG_NUMERIC,
so that there's no risk of a fetch off the end of memory. Still, the code
is wrong on its own terms, and it's not hard to foresee future changes that
might expose us to real risks. So back-patch to all affected branches.
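
An illustrative sketch of the bit layout at issue (not the verbatim
macros): the NaN header pattern includes the short-header bit, so the
header-size test must check that bit alone rather than testing for the
SHORT format specifically:

    #define NUMERIC_SHORT   0x8000      /* short-format flag bit */
    #define NUMERIC_NAN     0xC000      /* note: 0x8000 is set here too */

    /* any header with the short bit set is a short header, NaN included */
    #define NUMERIC_HEADER_IS_SHORT(n) \
        (((n)->choice.n_header & NUMERIC_SHORT) != 0)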
|
Commit 1be4eb1b2d436d1375899c74e4c74486890d8777 disabled this, but I
think the real problem here was fixed by commit
b181a91981203f6ec9403115a2917bd3f9473707 and commit
d060e07fa919e0eb681e2fa2cfbe63d6c40eb2cf. So let's try re-enabling
it now and see what happens.
|
Fix unsafe use of a non-volatile variable in PG_TRY/PG_CATCH in
AlterSystemSetConfigFile(). While at it, clean up a bundle of other
infelicities and outright bugs, including corner-case-incorrect linked list
manipulation, a poorly designed and worse documented parse-and-validate
function (which even included some randomly chosen hard-wired substitutes
for the specified elevel in one code path ... wtf?), direct use of open()
instead of fd.c's facilities, inadequate checking of write()'s return
value, and generally poorly written commentary.
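
The rule being enforced, in a minimal sketch (path is a placeholder;
note the fd.c calls in place of raw open()):

    volatile bool  file_opened = false;     /* set in TRY, read in CATCH:
                                             * must be volatile */
    FILE *volatile fp = NULL;

    PG_TRY();
    {
        fp = AllocateFile(path, "w");
        file_opened = true;
        /* ... work that may elog(ERROR) ... */
    }
    PG_CATCH();
    {
        if (file_opened)
            FreeFile(fp);
        PG_RE_THROW();
    }
    PG_END_TRY();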
|
Fix unsafe coding around PG_TRY in RelationBuildRowSecurity: can't change
a variable inside PG_TRY and then use it in PG_CATCH without marking it
"volatile". In this case though it seems saner to avoid that by doing
a single assignment before entering the TRY block.
I started out just intending to fix that, but the more I looked at the
row-security code the more distressed I got. This patch also fixes
incorrect construction of the RowSecurityPolicy cache entries (there was
not sufficient care taken to copy pass-by-ref data into the cache memory
context) and a whole bunch of sloppiness around the definition and use of
pg_policy.polcmd. You can't use nulls in that column because initdb will
mark it NOT NULL --- and I see no particular reason why a null entry would
be a good idea anyway, so changing initdb's behavior is not the right
answer. The internal value of '\0' wouldn't be suitable in a "char" column
either, so after a bit of thought I settled on using '*' to represent ALL.
Chasing those changes down also revealed that somebody wasn't paying
attention to what the underlying values of ACL_UPDATE_CHR etc really were,
and there was a great deal of lackadaisicalness in the catalogs.sgml
documentation for pg_policy and pg_policies too.
This doesn't pretend to be a complete code review for the row-security
stuff, it just fixes the things that were in my face while dealing with
the bugs in RelationBuildRowSecurity.
|
strncpy() has a well-deserved reputation for being unsafe, so make an
effort to get rid of nearly all occurrences in HEAD.
A large fraction of the remaining uses were passing length less than or
equal to the known strlen() of the source, in which case no null-padding
can occur and the behavior is equivalent to memcpy(), though doubtless
slower and certainly harder to reason about. So just use memcpy() in
these cases.
In other cases, use either StrNCpy() or strlcpy() as appropriate (depending
on whether padding to the full length of the destination buffer seems
useful).
I left a few strncpy() calls alone in the src/timezone/ code, to keep it
in sync with upstream (the IANA tzcode distribution). There are also a
few such calls in ecpg that could possibly do with more analysis.
AFAICT, none of these changes are more than cosmetic, except for the four
occurrences in fe-secure-openssl.c, which are in fact buggy: an overlength
source leads to a non-null-terminated destination buffer and ensuing
misbehavior. These don't seem like security issues, first because no stack
clobber is possible and second because if your values of sslcert etc are
coming from untrusted sources then you've got problems way worse than this.
Still, it's undesirable to have unpredictable behavior for overlength
inputs, so back-patch those four changes to all active branches.
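
The substitutions, sketched (dst is assumed to be a fixed-size array):

    /* len is known to be <= strlen(src): no padding could occur */
    memcpy(dst, src, len);

    /* truncation with guaranteed NUL termination, no padding */
    strlcpy(dst, src, sizeof(dst));

    /* zero-padding the rest of the buffer is actually wanted */
    StrNCpy(dst, src, sizeof(dst));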
|
Peter Geoghegan
|
When we write tuples out to disk and read them back in, the abbreviated
keys become non-abbreviated, because the readtup routines don't know
anything about abbreviation. But without this fix, the rest of the
code still thinks the abbreviation-aware comparator should be used,
so chaos ensues.
Report by Andrew Gierth; patch by Peter Geoghegan.
|
The split between which things need to happen in the C-locale case and
which need to happen in the locale-aware case was a few bricks short
of a load. Try to fix that.
|
First, when LC_COLLATE = C, bttext_abbrev_convert should use memcpy()
rather than strxfrm() to construct the abbreviated key, because the
authoritative comparator uses memcmp(). If we do anything else here,
we might get inconsistent answers, and the buildfarm says this risk
is not theoretical. It should be faster this way, too.
Second, while I'm looking at bttext_abbrev_convert, rewrite a needless
goto, used there to implement a loop, as an actual loop.
Both of the above problems date to the original commit of abbreviated
keys, commit 4ea51cdfe85ceef8afabceb03c446574daa0ac23.
Third, fix a bogus assignment to tss->locale before tss is set up.
That's a new goof in commit b529b65d1bf8537ca7fa024760a9782d7c8b66e5.
|
Prior to commit 4ea51cdfe85ceef8afabceb03c446574daa0ac23, this function
only had one job, which was to decide whether we could avoid trampolining
through the fmgr layer when performing sort comparisons. As of that
commit, it has a second job, which is to decide whether we can use
abbreviated keys. Unfortunately, those two tasks are somewhat intertwined
in the existing coding, which is likely why neither Peter Geoghegan nor
I noticed prior to commit that this calls pg_newlocale_from_collation() in
cases where it didn't previously. The buildfarm noticed, though.
To fix, rewrite the logic so that the decision as to which comparator to
use is more cleanly separated from the decision about abbreviation.
|
Most of the Windows buildfarm members (bowerbird, hamerkop, currawong,
jacana, brolga) are unhappy with yesterday's abbreviated keys patch,
although there are some (narwhal, frogmouth) that seem OK with it.
Since there's no obvious pattern to explain why some are working and
others are failing, just disable this across-the-board on Windows for
now. This is a bit unfortunate since the optimization will be a big
win in some cases, but we can't leave the buildfarm broken.
|
This commit extends the SortSupport infrastructure to allow operator
classes the option to provide abbreviated representations of Datums;
in the case of text, we abbreviate by taking the first few characters
of the strxfrm() blob. If the abbreviated comparison is insufficient
to resolve the comparison, we fall back on the normal comparator.
This can be much faster than the old way of sorting if the
first few bytes of the string are usually sufficient to resolve the
comparison.
There is the potential for a performance regression if all of the
strings to be sorted are identical for the first 8+ characters and
differ only in later positions; therefore, the SortSupport machinery
now provides an infrastructure to abort the use of abbreviation if
it appears that abbreviation is producing comparatively few distinct
keys. HyperLogLog, a streaming cardinality estimator, is included in
this commit and used to make that determination for text.
Peter Geoghegan, reviewed by me.
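
Roughly, an opclass sortsupport function opts in by filling the new
hooks; the helper names in this sketch are hypothetical:

    Datum
    btmytype_sortsupport(PG_FUNCTION_ARGS)
    {
        SortSupport ssup = (SortSupport) PG_GETARG_POINTER(0);

        ssup->comparator = mytype_cmp;
        if (ssup->abbreviate)
        {
            ssup->abbrev_converter = mytype_abbrev_convert;
            ssup->abbrev_abort = mytype_abbrev_abort;
            ssup->abbrev_full_comparator = mytype_cmp;
            /* compares abbreviated datums; ties go to the full comparator */
            ssup->comparator = mytype_cmp_abbrev;
        }
        PG_RETURN_VOID();
    }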
|
Currently, a backend will reset its PGXACT->xmin value when it doesn't
have any registered snapshots left. That covered the common case that a
transaction in read committed mode runs several queries, one after
another, as there would be no snapshots active between those queries.
However, if you hold cursors across the queries, we didn't get a
chance to reset xmin.
To make that better, keep all the registered snapshots in a pairing heap,
ordered by xmin so that it's always quick to find the snapshot with the
smallest xmin. That allows us to advance PGXACT->xmin whenever the oldest
snapshot is deregistered, even if there are others still active.
Per discussion originally started by Jeff Davis back in 2009 and more
recently by Robert Haas.
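
A sketch of the heap ordering, modeled on the pairingheap API: the
comparator is inverted so the snapshot with the smallest xmin surfaces
first:

    static int
    xmin_cmp(const pairingheap_node *a, const pairingheap_node *b,
             void *arg)
    {
        const SnapshotData *sa =
            pairingheap_const_container(SnapshotData, ph_node, a);
        const SnapshotData *sb =
            pairingheap_const_container(SnapshotData, ph_node, b);

        if (TransactionIdPrecedes(sa->xmin, sb->xmin))
            return 1;       /* smaller xmin sorts to the top */
        if (TransactionIdPrecedes(sb->xmin, sa->xmin))
            return -1;
        return 0;
    }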
|
As of 9.3, ruleutils.c goes to some lengths to ensure that table and column
aliases used in its output are unique. Of course this takes more time than
was required before, which in itself isn't fatal. However, EXPLAIN was set
up so that recalculation of the unique aliases was repeated for each
subexpression printed in a plan. That results in O(N^2) time and memory
consumption for large plan trees, which did not happen in older branches.
Fortunately, the expensive work is the same across a whole plan tree,
so there is no need to repeat it; we can do most of the initialization
just once per query and re-use it for each subexpression. This buys
back most (not all) of the performance loss since 9.2.
We need an extra ExplainState field to hold the precalculated deparse
context. That's no problem in HEAD, but in the back branches, expanding
sizeof(ExplainState) seems risky because third-party extensions might
have local variables of that struct type. So, in 9.4 and 9.3, introduce
an auxiliary struct to keep sizeof(ExplainState) the same. We should
refactor the APIs to avoid such local variables in future, but that's
material for a separate HEAD-only commit.
Per gripe from Alexey Bashtanov. Back-patch to 9.3 where the issue
was introduced.
|
To do so, move InitializeLatchSupport() into the new common process
initialization functions, and add a new global variable MyLatch.
MyLatch is usable as soon as InitPostmasterChild() has been called
(i.e. very early during startup). Initially it points to a process
local latch that exists in all processes. InitProcess/InitAuxiliaryProcess
then replaces that local latch with PGPROC->procLatch. During shutdown
the reverse happens.
This is primarily advantageous for two reasons: For one, it simplifies
dealing with the shared process latch, especially in signal handlers,
because instead of having to check for MyProc, MyLatch can be used
unconditionally. For another, a later patch that makes FE/BE
communication use latches can now rely on the existence of a latch,
even before having gone through InitProcess.
Discussion: 20140927191243.GD5423@alap3.anarazel.de
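
The simplification for signal handlers, sketched: no more checking
whether MyProc is set before poking the shared latch:

    static void
    my_sigusr1_handler(SIGNAL_ARGS)
    {
        int save_errno = errno;

        /* valid from InitPostmasterChild() onward, in every process */
        SetLatch(MyLatch);

        errno = save_errno;
    }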
|
Move common code, that was duplicated in every postmaster child/every
standalone process, into two functions in miscinit.c. Not only does
that already result in a fair amount of net code reduction but it also
makes it much easier to remove more duplication in the future. The
prime motivation wasn't code deduplication though, but easier addition
of new common code.
|
The mechanism added in commit dbdf9679d7d61b03a3bf73af9b095831b7010eb5
for associating the correct translation domain with errcontext strings
potentially fails in cases where errcontext() is used within an ereport()
macro. Such usage was not originally envisioned for errcontext(), but we
do have a few places that do it. In this situation, the intended comma
expression becomes just a couple of arguments to errfinish(), which the
compiler might choose to evaluate right-to-left.
Fortunately, in such cases the textdomain for the errcontext string must
be the same as for the surrounding ereport. So we can fix this by letting
errstart initialize context_domain along with domain; then it will have
the correct value no matter which order the calls occur in. (Note that
error stack callback functions are not invoked until errfinish, so normal
usage of errcontext won't affect what happens for errcontext calls within
the ereport macro.)
In passing, make sure that errcontext calls within the main backend set
context_domain to something non-NULL. This isn't a live bug because
NULL would select the current textdomain() setting which should be the
right thing anyway --- but it seems better to handle this completely
consistently with the regular domain field.
Per report from Dmitry Voronin. Backpatch to 9.3; before that, there
wasn't any attempt to ensure that errcontext strings were translated
in an appropriate domain.
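
For reference, a sketch of the usage pattern at issue (blkno and
relname are placeholders): inside the ereport() macro the auxiliary
calls become arguments to errfinish(), so errcontext() may be
evaluated before errmsg():

    ereport(ERROR,
            (errcode(ERRCODE_DATA_CORRUPTED),
             errmsg("could not read block %u", blkno),
             errcontext("while scanning relation \"%s\"", relname)));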
|
Previously, the xml value resulting from an xpath query would not have
namespace declarations if the namespace declarations were attached to
an ancestor element in the input xml value. That means the output value
was not correct XML. Fix that by running the result value through
xmlCopyNode(), which produces the correct namespace declarations.
Author: Ali Akbar <the.apaan@gmail.com>
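
The libxml2 call involved, sketched; the second argument of 1 requests
a recursive copy, which is what brings the namespace declarations
along:

    xmlNodePtr copy;

    copy = xmlCopyNode(node, 1);        /* 1 = recursive copy */
    if (copy == NULL)
        elog(ERROR, "could not copy xml node");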
|
When using a historic snapshot for logical decoding it can validly
happen that a relation that's in the relcache isn't visible to that
historic snapshot. E.g. if a newly created relation is referenced in
the query that uses the SQL interface for logical decoding and a
sinval reset occurs.
The earlier commit that fixed the error handling for that corner case
already improves the situation, as an ERROR is better than hitting an
assertion... But it's obviously not good enough. So additionally
allow that case without an error if a historic snapshot is set up -
that won't allow an invalid entry to stay in the cache, because (a) it's
already marked invalid and will thus be rebuilt during the next access,
and (b) the syscaches will be reset at the end of decoding.
There might be prettier solutions to handle this case, but all that we
could think of so far end up being much more complex than this quite
simple fix.
This fixes the assertion failures reported by the buildfarm (markhor,
tick, leech) after the introduction of new regression tests in
89fd41b390a4. The failures there weren't actually caused directly by
CLOBBER_CACHE_ALWAYS, but the extraordinarily long runtimes due to it
led to sinval resets triggering the behaviour.
Discussion: 22459.1418656530@sss.pgh.pa.us
Backpatch to 9.4 where logical decoding was introduced.
|
The corner case where a relcache invalidation tries to rebuild the
entry for a referenced relation but can't find it in the catalog
wasn't handled correctly.
The code tried to RelationCacheDelete/RelationDestroyRelation the
entry. That didn't work when assertions are enabled because the latter
contains an assertion ensuring the refcount is zero. It's also more
generally a bad idea, because by virtue of being referenced somebody
might actually look at the entry, which is possible if the error is
trapped and handled via a subtransaction abort.
Instead just error out, without deleting the entry. As the entry is
marked invalid, the worst that can happen is that the invalid (and at
some point unused) entry lingers in the relcache.
Discussion: 22459.1418656530@sss.pgh.pa.us
There should be no way to hit this case before 9.4, where logical
decoding introduced a bug that can hit it. But since the code for
handling the corner case is there, it should do something halfway sane,
so backpatch all the way back. The logical decoding bug will be
handled in a separate commit.
|
Backpatch certain files through 9.0
|
Report by Amit Kapila
|
This function returns object type and objname/objargs arrays, which can
be passed to pg_get_object_address. This is especially useful because
the textual representation can be copied to a remote server in order to
obtain the corresponding OID-based address. In essence, this function
is the inverse of the recently added pg_get_object_address().
Catalog version bumped due to the addition of the new function.
Also add docs to pg_get_object_address.
|
This reverts commit 60838df922345b26a616e49ac9fab808a35d1f85.
That change needs a bit more thought to be workable. In view of
the potentially machine-dependent stuff that went in today,
we need all of the buildfarm to be testing those other changes.
|
Hiding context messages usually is not a good idea - except for rather
verbose debugging/development utensils like LOG_DEBUG. There the
amount of repeated context messages just bloats the log without adding
information.
|
Exposing compression and decompression APIs of pglz makes possible its
use by extensions and contrib modules. pglz_decompress contained a call
to elog to emit an error message in case of corrupted data. This function
is changed to return a status code to let its callers return an error instead.
This commit is required for the upcoming WAL compression feature so that
the WAL reader facility can decompress the WAL data by using pglz_decompress.
Michael Paquier
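
A sketch of the revised contract, assuming (per the description above)
that corruption is reported through the return value rather than an
internal elog:

    int32 rawsize;

    rawsize = pglz_decompress(src, compressed_len, dest, expected_len);
    if (rawsize < 0)
        ereport(ERROR,
                (errcode(ERRCODE_DATA_CORRUPTED),
                 errmsg("compressed data is corrupt")));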
|
This reverts commit 1826987a46d079458007b7b6bbcbbd852353adbb.
The overall design was deemed unacceptable, in discussion following the
previous commit message; we might find some parts of it still
salvageable, but I don't want to be on the hook for fixing it, so let's
wait until we have a new patch.
|
The previous representation, using a boolean column for each attribute,
would not scale well as further attributes are added.
Extra auxiliary functions are added to go along with this change, to
make up for the lost convenience of access of the old representation.
Catalog version bumped due to change in catalogs and the new functions.
Author: Adam Brightwell, minor tweaks by Álvaro
Reviewed by: Stephen Frost, Andres Freund, Álvaro Herrera
|
This allows it to be used with ALTER ROLE SET.
Although the old setting of PGC_BACKEND prevented changes after session
start, after discussion it was deemed more useful to allow ALTER ROLE SET
instead and just document that changes during a session have no effect.
This is similar to how session_preload_libraries works already.
An alternative would be to change things to allow PGC_BACKEND and
PGC_SU_BACKEND settings to be changed by ALTER ROLE SET. But that might
need further research (e.g., log_connections would probably not work).
based on patch by Kyotaro Horiguchi
|
We have other general-purpose data structures in src/backend/lib, so it
seems like a better home for the red-black tree as well.
|
Previously, if you wanted anything besides C-string hash keys, you had to
specify a custom hashing function to hash_create(). Nearly all such
callers were specifying tag_hash or oid_hash, which is tedious and rather
error-prone, since a caller could easily miss the opportunity to optimize
by using hash_uint32 when appropriate. Replace this with a design whereby
callers using simple binary-data keys just specify HASH_BLOBS and don't
need to mess with specific support functions. hash_create() itself will
take care of optimizing when the key size is four bytes.
This nets out saving a few hundred bytes of code space, and offers
a measurable performance improvement in tidbitmap.c (which was not
exploiting the opportunity to use hash_uint32 for its 4-byte keys).
There might be some wins elsewhere too, but I didn't analyze closely.
In future we could look into offering a similar optimized hashing function
for 8-byte keys. Under this design that could be done in a centralized
and machine-independent fashion, whereas getting it right for keys of
platform-dependent sizes would've been notationally painful before.
For the moment, the old way still works fine, so as not to break source
code compatibility for loadable modules. Eventually we might want to
remove tag_hash and friends from the exported API altogether, since there's
no real need for them to be explicitly referenced from outside dynahash.c.
Teodor Sigaev and Tom Lane
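
The new idiom for a binary-key table, sketched (MyHashEntry is a
placeholder):

    HASHCTL ctl;
    HTAB   *htab;

    MemSet(&ctl, 0, sizeof(ctl));
    ctl.keysize = sizeof(Oid);          /* 4 bytes: hash_uint32 is used */
    ctl.entrysize = sizeof(MyHashEntry);

    /* HASH_BLOBS replaces an explicit tag_hash/oid_hash function */
    htab = hash_create("my cache", 128, &ctl, HASH_ELEM | HASH_BLOBS);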
|
In generate_series_step_numeric(), the variables "start_num"
and "stop_num" could be freed before the next call of the function,
so they need to be kept in a location that survives across calls.
Previously they were not, which could cause incorrect
behavior of generate_series(numeric, numeric). This commit fixes
the problem by copying them into multi_call_memory_ctx.
Andrew Gierth
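
The general rule for set-returning functions, sketched: per-call state
must be copied into multi_call_memory_ctx on the first call
(SeriesState is a hypothetical state struct):

    if (SRF_IS_FIRSTCALL())
    {
        FuncCallContext *funcctx = SRF_FIRSTCALL_INIT();
        MemoryContext oldcontext;
        SeriesState *state;

        oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
        state = (SeriesState *) palloc(sizeof(SeriesState));
        /* these copies survive across calls; the originals may not */
        state->start_num = DatumGetNumericCopy(PG_GETARG_DATUM(0));
        state->stop_num = DatumGetNumericCopy(PG_GETARG_DATUM(1));
        funcctx->user_fctx = state;
        MemoryContextSwitchTo(oldcontext);
    }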
|
Mostly these issues concern the non-use of function results. These
have been changed to use (void) pushJsonbValue(...) instead of assigning
the result to a variable that gets overwritten before it is used.
There is a larger issue that we should possibly examine the API for
pushJsonbValue(), so that instead of returning a value it modifies a
state argument. The current idiom is rather clumsy. However, changing
that requires quite a bit more work, so this change should do for the
moment.
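
The resulting idiom, sketched: only the closing WJB_END_* call's return
value matters, so the intermediate pushes discard theirs explicitly:

    JsonbParseState *state = NULL;
    JsonbValue key, val;        /* assumed filled in elsewhere */
    JsonbValue *result;

    (void) pushJsonbValue(&state, WJB_BEGIN_OBJECT, NULL);
    (void) pushJsonbValue(&state, WJB_KEY, &key);
    (void) pushJsonbValue(&state, WJB_VALUE, &val);
    result = pushJsonbValue(&state, WJB_END_OBJECT, NULL);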
|
"PG_RETURN_FLOAT8(x)" is not "return x", except perhaps by accident
on some platforms.
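
The point, sketched: a V1-convention function returns Datum, so the
float has to be boxed, which is exactly what the macro does:

    Datum
    my_float_func(PG_FUNCTION_ARGS)
    {
        float8 result = 42.0;

        /* expands to return Float8GetDatum(result) */
        PG_RETURN_FLOAT8(result);
    }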
|
Alexander Korotkov, reviewed by Emre Hasegeli.
|
The code for advancing through the input rows overlooked the case that we
might already be past the first row of the row pair now being considered,
in case the previous percentile also fell between the same two input rows.
Report and patch by Andrew Gierth; logic rewritten a bit for clarity by me.
|
The functions are:
to_jsonb()
jsonb_object()
jsonb_build_object()
jsonb_build_array()
jsonb_agg()
jsonb_object_agg()
Along the way, better logic is implemented in
json_categorize_type() to match that in the newly implemented
jsonb_categorize_type().
Andrew Dunstan, reviewed by Pavel Stehule and Alvaro Herrera.
|
The functions remove object fields, including in nested objects, that
have null as a value. In certain cases this can lead to considerably
smaller datums, with no loss of semantic information.
Andrew Dunstan, reviewed by Pavel Stehule.
|
The amount of space to reserve for the value's varlena header is
VARHDRSZ, not sizeof(VARHDRSZ). The latter coding accidentally
failed to fail because of the way the VARHDRSZ macro is currently
defined; but if we ever change it to return size_t (as one might
reasonably expect it to do), convertToJsonb() would have failed.
Spotted by Mark Dilger.
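
The distinction, sketched; the two expressions only coincide because
VARHDRSZ happens to be defined as ((int32) sizeof(int32)):

    Size reserved = VARHDRSZ;   /* 4: room for a varlena header */
    /* sizeof(VARHDRSZ) == sizeof(int32) == 4 today, only by accident */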
|
Generate a table_rewrite event when ALTER TABLE
attempts to rewrite a table. Provide helper
functions to identify the table and the reason.
The intended use case is to help assess, or react
to, schema changes that might hold exclusive locks
for long periods.
Dimitri Fontaine, triggering an edit by Simon Riggs
Reviewed in detail by Michael Paquier
|
It was an oversight in the original commit.
Also note in the sample config file that changing wal_log_hints requires a
restart.
Michael Paquier. Backpatch to 9.4, where wal_log_hints was added.
|
Transactions can now set their commit timestamp directly as they commit,
or an external transaction commit timestamp can be fed from an outside
system using the new function TransactionTreeSetCommitTsData(). This
data is crash-safe, and truncated at the Xid freeze point, the same as
pg_clog. This module is disabled by default because it causes a
performance hit, but can be enabled in postgresql.conf, requiring only
a server restart.
A new test in src/test/modules is included.
Catalog version bumped due to the new subdirectory within PGDATA and a
couple of new SQL functions.
Authors: Álvaro Herrera and Petr Jelínek
Reviewed to varying degrees by Michael Paquier, Andres Freund, Robert
Haas, Amit Kapila, Fujii Masao, Jaime Casanova, Simon Riggs, Steven
Singer, Peter Eisentraut
|
Make the error messages issued by array_in() uniformly follow the style
    ERROR: malformed array literal: "actual input string"
    DETAIL: specific complaint here
and rewrite many of the specific complaints to be clearer.
The immediate motivation for doing this is a complaint from Josh Berkus
that json_to_record() produced an unintelligible error message when
dealing with an array item, because it tries to feed the JSON-format
array value to array_in(). Really it ought to be smart enough to
perform JSON-to-Postgres array conversion, but that's a future feature,
not a bug fix. In the meantime, this change is something we agreed
we could back-patch into 9.4, and it should help de-confuse things a bit.
|
Davide S. reported that json_agg() sometimes produced multiple trailing
right brackets. This turns out to be because json_agg_finalfn() attaches
the final right bracket, and was doing so by modifying the aggregate state
in-place. That's verboten, though unfortunately it seems there's no way
for nodeAgg.c to check for such mistakes.
Fix that back to 9.3, where the broken code was introduced. In 9.4 and
HEAD, likewise fix json_object_agg(), which had copied the erroneous logic.
Make some cosmetic cleanups as well.