Apparently, there are bugs in this code that cause it to loop endlessly.
Those bugs still need more research, but in the meantime it's clear that
the loop is missing a check for interrupts so that it can be cancelled
in a timely fashion.
Backpatch to 9.1 -- this has been missing since 49475aab8d0d.
| |
We tried to fetch statistics data from the index metapage, which does not
work if the index isn't actually present. If the index is hypothetical,
instead extrapolate some plausible internal statistics based on the index
page count provided by the index-advisor plugin.
There was already some code in gincostestimate() to invent internal stats
in this way, but since it was only meant as a stopgap for pre-9.1 GIN
indexes that hadn't been vacuumed since upgrading, it was pretty crude.
If we want it to support index advisors, we should try a little harder.
A small amount of testing says that it's better to estimate the entry pages
as 90% of the index, not 100%. Also, estimating the number of entries
(keys) as equal to the heap tuple count could be wildly wrong in either
direction. Instead, let's estimate 100 entries per entry page.
Perhaps someday somebody will want the index advisor to be able to provide
these numbers more directly, but for the moment this should serve.
Problem report and initial patch by Julien Rouhaud; modified by me to
invent less-bogus internal statistics. Back-patch to all supported
branches, since we've supported index advisors since 9.0.
| |
The previous way of reconstructing check constraints was to do a separate
"ALTER TABLE ONLY tab ADD CONSTRAINT" for each table in an inheritance
hierarchy. However, that way has no hope of reconstructing the check
constraints' own inheritance properties correctly, as pointed out in
bug #13779 from Jan Dirk Zijlstra. What we should do instead is to do
a regular "ALTER TABLE", allowing recursion, at the topmost table that
has a particular constraint, and then suppress the work queue entries
for inherited instances of the constraint.
Annoyingly, we'd tried to fix this behavior before, in commit 5ed6546cf,
but we failed to notice that it wasn't reconstructing the pg_constraint
field values correctly.
As long as I'm touching pg_get_constraintdef_worker anyway, tweak it to
always schema-qualify the target table name; this seems like useful backup
to the protections installed by commit 5f173040.
In HEAD/9.5, get rid of get_constraint_relation_oids, which is now unused.
(I could alternatively have modified it to also return conislocal, but that
seemed like a pretty single-purpose API, so let's not pretend it has some
other use.) It's unused in the back branches as well, but I left it in
place just in case some third-party code has decided to use it.
In HEAD/9.5, also rename pg_get_constraintdef_string to
pg_get_constraintdef_command, as the previous name did nothing to explain
what that entry point did differently from others (and its comment was
equally useless). Again, that change doesn't seem like material for
back-patching.
I did a bit of re-pgindenting in tablecmds.c in HEAD/9.5, as well.
Otherwise, back-patch to all supported branches.
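To illustrate the shape of the change (hypothetical table and constraint
names), the reconstruction commands roughly go from the first form to the
second:

    -- old approach: one non-recursive command per table, which cannot
    -- restore conislocal/coninhcount correctly for the child
    ALTER TABLE ONLY parent ADD CONSTRAINT a_pos CHECK (a > 0);
    ALTER TABLE ONLY child ADD CONSTRAINT a_pos CHECK (a > 0);

    -- new approach: one recursive command at the topmost table that has
    -- the constraint; inherited instances are then suppressed
    ALTER TABLE parent ADD CONSTRAINT a_pos CHECK (a > 0);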
| |
div_var_fast() postpones propagating carries in the same way as mul_var(),
so it has the same corner-case overflow risk we fixed in 246693e5ae8a36f0,
namely that the size of the carries has to be accounted for when setting
the threshold for executing a carry propagation step. We've not devised
a test case illustrating the brokenness, but the required fix seems clear
enough. Like the previous fix, back-patch to all active branches.
Dean Rasheed
| |
Since commit 11e131854f8231a21613f834c40fe9d046926387, ruleutils.c has
attempted to ensure that each RTE in a query or plan tree has a unique
alias name. However, the code that was added for this could be quite slow,
even as bad as O(N^3) if N identical RTE names must be replaced, as noted
by Jeff Janes. Improve matters by building a transient hash table within
set_rtable_names. The hash table in itself reduces the cost of detecting a
duplicate from O(N) to O(1), and we can save another factor of N by storing
the number of de-duplicated names already created for each entry, so that
we don't have to re-try names already created. This way is probably a bit
slower overall for small range tables, but almost by definition, such cases
should not be a performance problem.
In principle the same problem applies to the column-name-de-duplication
code; but in practice that seems to be less of a problem, first because
N is limited since we don't support extremely wide tables, and second
because duplicate column names within an RTE are fairly rare, so that in
practice the cost is more like O(N^2) not O(N^3). It would be very much
messier to fix the column-name code, so for now I've left that alone.
An independent problem in the same area was that the de-duplication code
paid no attention to the identifier length limit, and would happily produce
identifiers that were longer than NAMEDATALEN and wouldn't be unique after
truncation to NAMEDATALEN. This could result in dump/reload failures, or
perhaps even views that silently behaved differently than before. We can
fix that by shortening the base name as needed. Fix it for both the
relation and column name cases.
In passing, check for interrupts in set_rtable_names, just in case it's
still slow enough to be an issue.
Back-patch to 9.3 where this code was introduced.
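As a sketch of the situation being de-duplicated (hypothetical schema and
table names): two RTEs can legitimately carry the same name, and ruleutils
must rename one of them when decompiling:

    CREATE TABLE s1.t (a int);
    CREATE TABLE s2.t (b int);
    CREATE VIEW v AS SELECT * FROM s1.t, s2.t;
    -- both RTEs are named "t"; the decompiled definition renames the
    -- second one to something like t_1
    SELECT pg_get_viewdef('v'::regclass);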
| |
Normally ruleutils prints a whole-row Var as "foo.*". We already knew that
that doesn't work at top level of a SELECT list, because the parser would
treat the "*" as a directive to expand the reference into separate columns,
not a whole-row Var. However, Joshua Yanovski points out in bug #13776
that the same thing happens at top level of a ROW() construct; and some
nosing around in the parser shows that the same is true in VALUES().
Hence, apply the same workaround already devised for the SELECT-list case,
namely to add a forced cast to the appropriate rowtype in these cases.
(The alternative of just printing "foo" was rejected because it is
difficult to avoid ambiguity against plain columns named "foo".)
Back-patch to all supported branches.
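For example (hypothetical table), a whole-row Var stored inside ROW() now
decompiles with the forced cast:

    CREATE TABLE foo (a int, b text);
    -- "foo" by itself is a whole-row reference, stored as a whole-row Var
    CREATE VIEW v AS SELECT ROW(foo) FROM foo;
    -- printing it as ROW(foo.*) would re-parse as ROW(foo.a, foo.b);
    -- the decompiled text now reads along the lines of ROW((foo.*)::foo)
    SELECT pg_get_viewdef('v'::regclass);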
| |
The jsonb_path_ops code calculated hash values inconsistently in some cases
involving nested arrays and objects. This would result in queries possibly
not finding entries that they should find, when using a jsonb_path_ops GIN
index for the search. The problem cases involve JSONB values that contain
both scalars and sub-objects at the same nesting level, for example an
array containing both scalars and sub-arrays. To fix, reset the current
stack->hash after processing each value or sub-object, not before; and
don't try to be cute about the outermost level's initial hash.
Correcting this means that existing jsonb_path_ops indexes may now be
inconsistent with the new hash calculation code. The symptom is the same
--- searches not finding entries they should find --- but the specific
rows affected are likely to be different. Users will need to REINDEX
jsonb_path_ops indexes to make sure that all searches work as expected.
Per bug #13756 from Daniel Cheng. Back-patch to 9.4 where the faulty
logic was introduced.
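For example (hypothetical table and index names), recovering an affected
index looks like:

    -- problem cases mix scalars and sub-objects at one nesting level,
    -- e.g. a value like ["a", ["b"]]
    CREATE INDEX docs_j_idx ON docs USING gin (j jsonb_path_ops);
    -- after updating, rebuild so stored hashes match the corrected code
    REINDEX INDEX docs_j_idx;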
| |
Message style, plurals, quoting, spelling, consistency with similar
messages
| |
Commit bda76c1c8cfb1d11751ba6be88f0242850481733 caused both plus and
minus infinity to be rendered as "infinity", which is not only wrong
but inconsistent with the pre-9.4 behavior of to_json(). Fix that by
duplicating the coding in date_out/timestamp_out/timestamptz_out more
closely. Per bug #13687 from Stepan Perlov. Back-patch to 9.4, like
the previous commit.
In passing, also re-pgindent json.c, since it had gotten a bit messed up by
recent patches (and I was already annoyed by indentation-related problems
in back-patching this fix ...)
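For example, the rendering now tracks the sign, matching timestamptz_out:

    SELECT to_json('infinity'::timestamptz);   -- "infinity"
    SELECT to_json('-infinity'::timestamptz);  -- now "-infinity";
                                               -- previously "infinity"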
| |
The function failed to adhere to its specification that the "tcategory"
argument should not be examined when the input value is NULL. This
resulted in a crash in some cases. Per bug #13680 from Boyko Yordanov.
In passing, re-pgindent some recent changes in jsonb.c, and fix a rather
ungrammatical comment.
Diagnosis and patch by Michael Paquier, cosmetic changes by me
| |
Many functions stored JsonbIteratorToken values in variables of other
integer types. Also, standardize order relative to other declarations.
Expect compilers to generate the same code before and after this change.
| |
The tsquery, ltxtquery and query_int data types have a common ancestor.
Having acquired check_stack_depth() calls independently, each was
missing at least one call. Back-patch to 9.0 (all supported versions).
| |
A range type can name another range type as its subtype, and a record
type can bear a column of another record type. Consequently, functions
like range_cmp() and record_recv() are recursive. Functions at risk
include operator family members and referents of pg_type regproc
columns. Treat as recursive any such function that looks up and calls
the same-purpose function for a record column type or the range subtype.
Back-patch to 9.0 (all supported versions).
An array type's element type is never itself an array type, so array
functions are unaffected. Recursion depth proportional to array
dimensionality, found in array_dim_to_jsonb(), is fine thanks to MAXDIM.
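A sketch of how the nesting arises (hypothetical type names); each
additional layer adds one level of recursion inside range_cmp() and
friends, so enough layers could previously overflow the stack:

    CREATE TYPE int4range2 AS RANGE (subtype = int4range);
    CREATE TYPE int4range3 AS RANGE (subtype = int4range2);
    -- ... and so on; comparisons of the outermost type recurse once
    -- per layer, now guarded by check_stack_depth()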
| |
Sufficiently-deep recursion heretofore elicited a SIGSEGV. If an
application constructs PostgreSQL json or jsonb values from arbitrary
user input, application users could have exploited this to terminate all
active database connections. That applies to 9.3, where the json parser
adopted recursive descent, and later versions. Only row_to_json() and
array_to_json() were at risk in 9.2, both in a non-security capacity.
Back-patch to 9.2, where the json type was introduced.
Oskari Saarenmaa, reviewed by Michael Paquier.
Security: CVE-2015-5289
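For example, deeply nested input of this shape now fails cleanly with a
"stack depth limit exceeded" error instead of crashing the backend:

    SELECT (repeat('[', 1000000) || '1' || repeat(']', 1000000))::json;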
| |
To allow users to force RLS to always be applied, even for table owners,
add ALTER TABLE .. FORCE ROW LEVEL SECURITY.
row_security=off overrides FORCE ROW LEVEL SECURITY, to ensure pg_dump
output is complete (by default).
Also add SECURITY_NOFORCE_RLS context to avoid data corruption when
ALTER TABLE .. FORCE ROW LEVEL SECURITY is being used. The
SECURITY_NOFORCE_RLS security context is used only during referential
integrity checks and is only considered in check_enable_rls() after we
have already checked that the current user is the owner of the relation
(which should always be the case during referential integrity checks).
Back-patch to 9.5 where RLS was added.
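For example (hypothetical table and policy):

    CREATE TABLE accounts (id int, owner name);
    CREATE POLICY accounts_owner ON accounts USING (owner = current_user);
    ALTER TABLE accounts ENABLE ROW LEVEL SECURITY;
    -- apply the policies even when the table owner queries the table
    ALTER TABLE accounts FORCE ROW LEVEL SECURITY;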
| |
Null path elements and, where the object is an array, invalid integer
elements now cause an error.
Incorrect behaviour noted by Thom Brown, patch from Dmitry Dolgov.
Backpatch to 9.5, where jsonb_set was introduced.
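For example, both of these now raise an error instead of returning a
surprising result:

    SELECT jsonb_set('{"a": 1}', ARRAY[NULL]::text[], '2');
    -- null path element: error
    SELECT jsonb_set('[0, 1, 2]', '{foo}', '"x"');
    -- non-integer subscript on an array: error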
| |
Since MatchText() recurses, it could in principle be driven to stack
overflow, although quite a long pattern would be needed.
| |
This replaces ill-fated commit 5ddc72887a012f6a8b85707ef27d85c274faf53d,
which was reverted because it broke active uses of FK cache entries. In
this patch, we still do nothing more to invalidatable cache entries than
mark them as needing revalidation, so we won't break active uses. To keep
down the overhead of InvalidateConstraintCacheCallBack(), keep a list of
just the currently-valid cache entries. (The entries are large enough that
some added space for list links doesn't seem like a big problem.) This
would still be O(N^2) when there are many valid entries, though, so when
the list gets too long, just force the "sinval reset" behavior to remove
everything from the list. I set the threshold at 1000 entries, somewhat
arbitrarily. Possibly that could be fine-tuned later. Another item for
future study is whether it's worth adding reference counting so that we
could safely remove invalidated entries. As-is, problem cases are likely
to end up with large and mostly invalid FK caches.
Like the previous attempt, backpatch to 9.3.
Jan Wieck and Tom Lane
| |
Previously, a function call appearing at the top level of WHERE had a
hard-wired selectivity estimate of 0.3333333, a kludge conveniently dated
in the source code itself to July 1992. The expectation at the time was
that somebody would soon implement estimator support functions analogous
to those for operators; but no such code has appeared, nor does it seem
likely to in the near future. We do have an alternative solution though,
at least for immutable functions on single relations: creating an
expression index on the function call will allow ANALYZE to gather stats
about the function's selectivity. But the code in clause_selectivity()
failed to make use of such data even if it exists.
Refactor so that that will happen. I chose to make it try this technique
for any clause type for which clause_selectivity() doesn't have a special
case, not just functions. To avoid adding unnecessary overhead in the
common case where we don't learn anything new, make selfuncs.c provide an
API that hooks directly to examine_variable() and then var_eq_const(),
rather than the previous coding which laboriously constructed an OpExpr
only so that it could be expensively deconstructed again.
I preserved the behavior that the default estimate for a function call
is 0.3333333. (For any other expression node type, it's 0.5, as before.)
I had originally thought to make the default be 0.5 across the board, but
changing a default estimate that's survived for twenty-three years seems
like something not to do without a lot more testing than I care to put
into it right now.
Per a complaint from Jehan-Guillaume de Rorthais. Back-patch into 9.5,
but not further, at least for the moment.
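For example (hypothetical table and function names), this is the
arrangement that now feeds real stats into the estimate:

    CREATE FUNCTION is_interesting(text) RETURNS bool
        AS 'SELECT left($1, 1) = ''x''' LANGUAGE sql IMMUTABLE;
    -- the expression index lets ANALYZE gather stats on the function's
    -- output
    CREATE INDEX events_interesting_idx ON events (is_interesting(payload));
    ANALYZE events;
    -- the selectivity here now comes from those stats, not 0.3333333
    EXPLAIN SELECT * FROM events WHERE is_interesting(payload);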
| |
mul_var() postpones propagating carries until it risks overflow in its
internal digit array. However, the logic failed to account for the
possibility of overflow in the carry propagation step, allowing wrong
results to be generated in corner cases. We must slightly reduce the
when-to-propagate-carries threshold to avoid that.
Discovered and fixed by Dean Rasheed, with small adjustments by me.
This has been wrong since commit d72f6c75038d8d37e64a29a04b911f728044d83b,
so back-patch to all supported branches.
| |
This commit's parent made superfluous the bit's sole usage. Referential
integrity checks have long run as the subject table's owner, and that
now implies RLS bypass. Safe use of the bit was tricky, requiring
strict control over the SQL expressions evaluating therein. Back-patch
to 9.5, where the bit was introduced.
Based on a patch by Stephen Frost.
| |
These functions have been looking up type info for every row they
process. Instead of doing that, we now look it up only the first time
through and stash the information in the aggregate state object.
Affects json_agg, json_object_agg, jsonb_agg and jsonb_object_agg.
There is plenty more work to do in making these more efficient,
especially the jsonb functions, but this is a virtually cost-free
improvement that can be done right away.
Backpatch to 9.5 where the jsonb variants were introduced.
| |
Commit 5ddc72887a012f6a8b85707ef27d85c274faf53d does not actually work
because it will happily blow away ri_constraint_cache entries that are
in active use in outer call levels. In any case, it's a very ugly,
brute-force solution to the problem of limiting the cache size.
Revert until it can be redesigned.
| |
To prevent perverse results, we now only return the other operand if
it's not scalar, and if both operands are of the same kind (array or
object).
Original bug complaint and patch from Oskari Saarenmaa, extended by me
to cover the cases of different kinds of jsonb.
Backpatch to 9.5 where jsonb_concat was introduced.
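For example, concatenating with an empty operand of a different kind no
longer hands back the other operand unchanged:

    SELECT '{"a": 1}'::jsonb || '[]'::jsonb;
    -- previously {"a": 1}; now [{"a": 1}], the same treatment a
    -- non-empty array operand would get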
| |
Commit 45ba424f improved foreign key lookups during bulk updates
when the FK value does not change. When restoring a schema dump
from a database with many (say 100,000) foreign keys, this cache
would grow very big and every ALTER TABLE command was causing an
InvalidateConstraintCacheCallBack(), which uses a sequential hash
table scan. This could cause a severe performance regression in
restoring a schema dump (including during pg_upgrade).
The patch uses a heuristic method of detecting when the hash table
should be destroyed and recreated.
InvalidateConstraintCacheCallBack() adds the current size of the
hash table to a counter. When that sum reaches 1,000,000, the hash
table is flushed. This fixes the regression without noticeable
harm to the bulk update use case.
Jan Wieck
Backpatch to 9.3 where the performance regression was introduced.
| |
The 'dow' and 'doy' tokens were previously categorized as RESERV. RESERV is
meant for tokens like "now", and having them in that category throws errors
like these when used as an input date:
stark=# SELECT 'doy'::timestamptz;
ERROR: unexpected dtype 33 while parsing timestamptz "doy"
LINE 1: SELECT 'doy'::timestamptz;
^
stark=# SELECT 'dow'::timestamptz;
ERROR: unexpected dtype 32 while parsing timestamptz "dow"
LINE 1: SELECT 'dow'::timestamptz;
^
Found by LLVM's libFuzzer
| |
Oskari Saarenmaa. Backpatch to stable branches where applicable.
| |
If we have the typmod that identifies a registered record type, there's no
reason that record_in() should refuse to perform input conversion for it.
Now, in direct SQL usage, record_in() will always be passed typmod = -1
with type OID RECORDOID, because no typmodin exists for type RECORD, so the
case can't arise. However, some InputFunctionCall users such as PLs may be
able to supply the right typmod, so we should allow this to support them.
Note: the previous coding and comment here predate commit 59c016aa9f490b53.
There has been no case since 8.1 in which the passed type OID wouldn't be
valid; and if it weren't, this error message wouldn't be apropos anyway.
Better to let lookup_rowtype_tupdesc complain about it.
Back-patch to 9.1, as this is necessary for my upcoming plpython fix.
I'm committing it separately just to make it a bit more visible in the
commit history.
| |
It's against project policy to use elog() for user-facing errors, or to
omit an errcode() selection for errors that aren't supposed to be "can't
happen" cases. Fix all the violations of this policy that result in
ERRCODE_INTERNAL_ERROR log entries during the standard regression tests,
as errors that can reliably be triggered from SQL surely should be
considered user-facing.
I also looked through all the files touched by this commit and fixed
other nearby problems of the same ilk. I do not claim to have fixed
all violations of the policy, just the ones in these files.
In a few places I also changed existing ERRCODE choices that didn't
seem particularly appropriate; mainly replacing ERRCODE_SYNTAX_ERROR
by something more specific.
Back-patch to 9.5, but no further; changing ERRCODE assignments in
stable branches doesn't seem like a good idea.
| |
As in commit 0a52d378b03b7d5a, avoid doing something that has undefined
results according to the C standard, even though in practice there does
not seem to be any problem with it.
This fixes two places in numeric.c that demonstrably could call memcpy()
with such arguments. I looked through that file and didn't see any other
places with similar hazards; this is not to claim that there are not such
places in other files.
Per report from Piotr Stefaniak. Back-patch to 9.5 which is where the
previous commit was added. We're more or less setting a precedent that
we will not worry about this type of issue in pre-9.5 branches unless
someone demonstrates a problem in the field.
| |
Although I think on all modern machines floating division by zero
results in Infinity not SIGFPE, we still don't want infinities
running around in the planner's costing estimates; too much risk
of that leading to insane behavior.
grouping_planner() failed to consider the possibility that final_rel
might be known dummy and hence have zero rowcount. (I wonder if it
would be better to set a rows estimate of 1 for dummy relations?
But at least in the back branches, changing this convention seems
like a bad idea, so I'll leave that for another day.)
Make certain that get_variable_numdistinct() produces a nonzero result.
The case that can be shown to be broken is with stadistinct < 0.0 and
small ntuples; we did not prevent the result from rounding to zero.
For good luck I applied clamp_row_est() to all the nonconstant return
values.
In ExecChooseHashTableSize(), Assert that we compute positive nbuckets
and nbatch. I know of no reason to think this isn't the case, but it
seems like a good safety check.
Per reports from Piotr Stefaniak. Back-patch to all active branches.
| |
The previous code resulted in memory access beyond the path bounds. The
cure is to move it into a code branch that checks that the value of
lex_level is within the correct bounds.
Bug reported and diagnosed by Piotr Stefaniak.
| |
The pg_stats view is supposed to be restricted to only show rows
about tables the user can read. However, it sometimes can leak
information which could not otherwise be seen when row level security
is enabled. Fix that by not showing pg_stats rows to users that would
be subject to RLS on the table the row is related to. This is done
by creating/using the newly introduced SQL visible function,
row_security_active().
Along the way, clean up three call sites of check_enable_rls(). The second
argument of that function should only be specified as other than
InvalidOid when we are checking as a different user than the current one,
as in when querying through a view. These sites were passing GetUserId()
instead of InvalidOid, which can cause the function to return incorrect
results if the current user has the BYPASSRLS privilege and row_security
has been set to OFF.
Additionally fix a bug causing RI Trigger error messages to unintentionally
leak information when RLS is enabled, and other minor cleanup and
improvements. Also add WITH (security_barrier) to the definition of pg_stats.
Bumped CATVERSION due to new SQL functions and pg_stats view definition.
Back-patch to 9.5 where RLS was introduced. Reported by Yaroslav.
Patch by Joe Conway and Dean Rasheed with review and input by
Michael Paquier and Stephen Frost.
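The new function can also be called directly, e.g. (hypothetical table
name):

    -- true if the current user's access to the table is subject to RLS
    SELECT row_security_active('accounts'::regclass);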
| |
The original implementation of TABLESAMPLE modeled the tablesample method
API on index access methods, which wasn't a good choice because, without
specialized DDL commands, there's no way to build an extension that can
implement a TSM. (Raw inserts into system catalogs are not an acceptable
thing to do, because we can't undo them during DROP EXTENSION, nor will
pg_upgrade behave sanely.) Instead adopt an API more like procedural
language handlers or foreign data wrappers, wherein the only SQL-level
support object needed is a single handler function identified by having
a special return type. This lets us get rid of the supporting catalog
altogether, so that no custom DDL support is needed for the feature.
Adjust the API so that it can support non-constant tablesample arguments
(the original coding assumed we could evaluate the argument expressions at
ExecInitSampleScan time, which is undesirable even if it weren't outright
unsafe), and discourage sampling methods from looking at invisible tuples.
Make sure that the BERNOULLI and SYSTEM methods are genuinely repeatable
within and across queries, as required by the SQL standard, and deal more
honestly with methods that can't support that requirement.
Make a full code-review pass over the tablesample additions, and fix
assorted bugs, omissions, infelicities, and cosmetic issues (such as
failure to put the added code stanzas in a consistent ordering).
Improve EXPLAIN's output of tablesample plans, too.
Back-patch to 9.5 so that we don't have to support the original API
in production.
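Under the new API, the only SQL-level object a method needs is its handler
function; the contrib module tsm_system_rows shows the pattern (hypothetical
table name in the query):

    CREATE EXTENSION tsm_system_rows;
    -- the extension supplies system_rows(internal) RETURNS tsm_handler
    SELECT * FROM events TABLESAMPLE SYSTEM_ROWS(100);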
| |
The wrong is_null flag was being passed to datum_to_json. Also, null
object key values are not permitted, and this was not being checked
for. Add regression tests covering these cases, and also add those tests
to the json set, even though the json code was already doing the right thing.
Fixes bug #13514, initially diagnosed by Tom Lane.
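For example, a null key in the jsonb aggregate now errors out instead of
being accepted:

    SELECT jsonb_object_agg(k, v)
    FROM (VALUES ('a', 1), (NULL, 2)) AS t(k, v);
    -- errors out: object field names must not be null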
| |
Defect noticed by Coverity.
| |
Previously, there was an inconsistency across json/jsonb operators that
operate on datums containing JSON arrays -- only some operators
supported negative array count-from-the-end subscripting. Specifically,
only a new-to-9.5 jsonb deletion operator had support (the new "jsonb -
integer" operator). This inconsistency seemed likely to be
counter-intuitive to users. To fix, allow all places where the user can
supply an integer subscript to accept a negative subscript value,
including path-orientated operators and functions, as well as other
extraction operators. This will need to be called out as an
incompatibility in the 9.5 release notes, since it's possible that users
are relying on certain established extraction operators changed here
yielding NULL in the event of a negative subscript.
For the json type, this requires adding a way of cheaply getting the
total JSON array element count ahead of time when parsing arrays with a
negative subscript involved, necessitating an ad-hoc lex and parse.
This is followed by a "conversion" from a negative subscript to its
equivalent positive-wise value using the count. From there on, it's as
if a positive-wise value was originally provided.
Note that there is still a minor inconsistency here across jsonb
deletion operators. Unlike the aforementioned new "-" deletion operator
that accepts an integer on its right-hand side, the new "#-"
path-orientated deletion variant does not throw an error when it appears
like an array subscript (input that could be recognized as an integer
literal) is being used on an object, which is wrong-headed. The reason
for not being stricter is that it could be the case that an object pair
happens to have a key value that looks like an integer; in general,
these two possibilities are impossible to differentiate with rhs path
text[] argument elements. However, we still don't allow the "#-"
path-orientated deletion operator to perform array-style subscripting.
Rather, we just return the original left operand value in the event of a
negative subscript (which seems analogous to how the established
"jsonb/json #> text[]" path-orientated operator may yield NULL in the
event of an invalid subscript).
In passing, make SetArrayPath() stricter about not accepting cases where
there are trailing non-numeric garbage bytes rather than a clean NUL
byte. This means, for example, that strings like "10e10" are now not
accepted as an array subscript of 10 by some new-to-9.5 path-orientated
jsonb operators (e.g. the new #- operator). Finally, remove dead code
for jsonb subscript deletion; arguably, this should have been done in
commit b81c7b409.
Peter Geoghegan and Andrew Dunstan
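For example, count-from-the-end subscripting now behaves uniformly across
the extraction operators and path functions:

    SELECT '["a", "b", "c"]'::jsonb -> -1;           -- "c"
    SELECT '["a", "b", "c"]'::json ->> -2;           -- b
    SELECT '{"f": [1, 2, 3]}'::jsonb #> '{f, -1}';   -- 3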
| |
This formalizes a decision implicit in commit
4ea51cdfe85ceef8afabceb03c446574daa0ac23 and adds clean detection of
affected systems. Vendor updates are available for each such known bug.
Back-patch to 9.5, where the aforementioned commit first appeared.
| |
Patch by David Rowley. Backpatch to 9.5, as some of the calls were new in
9.5, and keeping the code in sync with master makes future backpatching
easier.
| |
Without this, we might access memory that's already been freed, or
leak memory if in the C locale.
Peter Geoghegan
| |
This makes it possible to use the functions without getting errors, if there
is a chance that the file might be removed or renamed concurrently.
pg_rewind needs to do just that, although this could be useful for other
purposes too. (The changes to pg_rewind to use these functions will come in
a separate commit.)
The read_binary_file() function isn't very well-suited for extension.c's
purposes anymore, if it ever was. So bite the bullet and make a copy of it
in extension.c, tailored for that use case. This seems better than the
accidental code reuse, even if it means some more lines of code.
Michael Paquier, with plenty of kibitzing by me.
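For example (hypothetical file name), passing missing_ok = true makes the
functions return NULL for a vanished file instead of raising an error:

    SELECT pg_read_file('some_transient_file', 0, 128, true);
    SELECT pg_stat_file('some_transient_file', true);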
| |
The unit of measure is microseconds, not milliseconds.
Backpatch to 9.3 where the function and its comment were added.
| |
A few places assumed they could pass NULL for the argtypes array when
looking up functions known to have zero arguments. At first glance
it seems that this should be safe enough, since memcmp() is surely not
allowed to fetch any bytes if its count argument is zero. However,
close reading of the C standard says that such calls have undefined
behavior, so we'd probably best avoid it.
Since the number of places doing this is quite small, and some other
places looking up zero-argument functions were already passing dummy
arrays, let's standardize on the latter solution rather than hacking
the function lookup code to avoid calling memcmp() in these cases.
I also added Asserts to catch any future violations of the new rule.
Given the utter lack of any evidence that this actually causes any
problems in the field, I don't feel a need to back-patch this change.
Per report from Piotr Stefaniak, though this is not his patch.
| |
POSIX permits setlocale() calls to invalidate any previous setlocale()
return values, but commit 5f538ad004aa00cf0881f179f0cde789aad4f47e
neglected to account for setlocale(LC_CTYPE, NULL) doing so. The effect
was to set the LC_CTYPE environment variable to an unintended value.
pg_perm_setlocale() sets this variable to assist PL/Perl; without it,
Perl would undo PostgreSQL's locale settings. The known-affected
configurations are 32-bit, release builds using Visual Studio 2012 or
Visual Studio 2013. Visual Studio 2010 is unaffected, as were all
buildfarm-attested configurations. In principle, this bug could leave
the wrong LC_CTYPE in effect after PL/Perl use, which could in turn
facilitate problems like corrupt tsvector datums. No known platform
experiences that consequence, because PL/Perl on Windows does not use
this environment variable.
The bug has been user-visible, as early postmaster failure, on systems
with Windows ANSI code page set to CP936 for "Chinese (Simplified, PRC)"
and probably on systems using other multibyte code pages.
(SetEnvironmentVariable() rejects values containing character data not
valid under the Windows ANSI code page.) Back-patch to 9.4, where the
faulty commit first appeared.
Reported by Didi Hu and 林鹏程. Reviewed by Tom Lane, though this fix
strategy was not his first choice.
| |
This reverts commit b76e76be460a240e99c33f6fb470dd1d5fe01a2a. The
buildfarm yielded no related failures.
| |
POSIX permits setlocale() calls to invalidate any previous setlocale()
return values. Commit 5f538ad004aa00cf0881f179f0cde789aad4f47e
neglected to account for that. In advance of fixing that bug, switch to
failing hard on affected configurations. This is a planned temporary
commit to assay buildfarm-represented configurations.
| |
jsonb_set() and other clients of the setPathArray() utility function
could get spurious results when an array integer subscript is provided
that is not within the range of int.
To fix, ensure that the value returned by strtol() within setPathArray()
is within the range of int; when it isn't, assume an invalid input in
line with existing, similar cases. The path-orientated operators that
appeared in PostgreSQL 9.3 and 9.4 do not call setPathArray(), and
already independently take this precaution, so no change there.
Peter Geoghegan
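For example, a subscript that does not fit in int is now treated like other
invalid subscripts rather than producing a spurious result:

    -- 2147483648 is outside the range of int: rejected as invalid input
    SELECT jsonb_set('[0, 1, 2]', '{2147483648}', '"x"');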
| |
Supporting deletion of JSON pairs within jsonb objects using an
array-style integer subscript allowed for surprising outcomes. This was
mostly due to the implementation-defined ordering of pairs within
objects for jsonb.
It also seems desirable to make jsonb integer subscript deletion
consistent with the 9.4 era general purpose integer subscripting
operator for jsonb (although that operator returns NULL when an object
is encountered, while we prefer here to throw an error).
Peter Geoghegan, following discussion on -hackers.
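For example, integer-subscript deletion now applies only to arrays:

    SELECT '["a", "b", "c"]'::jsonb - 1;   -- ["a", "c"]
    SELECT '{"a": 1, "b": 2}'::jsonb - 1;  -- now an error, instead of
                                           -- deleting whichever pair
                                           -- happened to be second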