path: root/src/backend/utils
* Add missing CHECK_FOR_INTERRUPTS in lseg_inside_poly (Alvaro Herrera, 2015-12-14)
  Apparently, there are bugs in this code that cause it to loop endlessly. That bug still needs more research, but in the meantime it's clear that the loop is missing a check for interrupts so that it can be cancelled timely.

  Backpatch to 9.1 -- this has been missing since 49475aab8d0d.
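  An illustrative sketch of the fix pattern; the Edge type and segment-walking loop below are placeholders, not the actual lseg_inside_poly() code:

      #include "postgres.h"
      #include "miscadmin.h"              /* CHECK_FOR_INTERRUPTS() */

      typedef struct Edge                 /* placeholder type for illustration */
      {
          struct Edge *next;
      } Edge;

      static void
      walk_edges(Edge *edge)
      {
          while (edge != NULL)
          {
              /*
               * Without this call, a pathological input that keeps the loop
               * going cannot be cancelled by Ctrl-C or pg_cancel_backend().
               */
              CHECK_FOR_INTERRUPTS();
              edge = edge->next;
          }
      }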
* Improve some messages (Peter Eisentraut, 2015-12-10)
* Make gincostestimate() cope with hypothetical GIN indexes. (Tom Lane, 2015-12-01)
  We tried to fetch statistics data from the index metapage, which does not work if the index isn't actually present. If the index is hypothetical, instead extrapolate some plausible internal statistics based on the index page count provided by the index-advisor plugin.

  There was already some code in gincostestimate() to invent internal stats in this way, but since it was only meant as a stopgap for pre-9.1 GIN indexes that hadn't been vacuumed since upgrading, it was pretty crude. If we want it to support index advisors, we should try a little harder. A small amount of testing says that it's better to estimate the entry pages as 90% of the index, not 100%. Also, estimating the number of entries (keys) as equal to the heap tuple count could be wildly wrong in either direction. Instead, let's estimate 100 entries per entry page.

  Perhaps someday somebody will want the index advisor to be able to provide these numbers more directly, but for the moment this should serve.

  Problem report and initial patch by Julien Rouhaud; modified by me to invent less-bogus internal statistics. Back-patch to all supported branches, since we've supported index advisors since 9.0.
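  The extrapolation described above amounts to roughly the following sketch (simplified variable names, not the literal gincostestimate() code):

      /* numPages comes from the index-advisor plugin's page count estimate */
      double      numPages = index->pages;
      double      numEntryPages = numPages * 0.90;    /* entry pages: ~90% of index */
      double      numDataPages = numPages - numEntryPages;
      double      numEntries = numEntryPages * 100;   /* ~100 keys per entry page */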
* Avoid caching expression state trees for domain constraints across queries. (Tom Lane, 2015-11-29)
  In commit 8abb3cda0ddc00a0ab98977a1633a95b97068d4e I attempted to cache the expression state trees constructed for domain CHECK constraints for the life of the backend (assuming the domain's constraints don't get redefined). However, this turns out not to work very well, because execQual.c will run those state trees with ecxt_per_query_memory pointing to a query-lifespan context, and in some situations we'll end up with pointers into that context getting stored into the state trees. This happens in particular with SQL-language functions, as reported by Emre Hasegeli, but there are many other cases.

  To fix, keep only the expression plan trees for domain CHECK constraints in the typcache's data structure, and revert to performing ExecInitExpr (at least) once per query to set up expression state trees in the query's context. Eventually it'd be nice to undo this, but that will require some careful thought about memory management for expression state trees, and it seems far too late for any such redesign in 9.5. This way is still much more efficient than what happened before 8abb3cda0.
* Fix failure to consider failure cases in GetComboCommandId(). (Tom Lane, 2015-11-26)
  Failure to initially palloc the comboCids array, or to realloc it bigger when needed, left combocid's data structures in an inconsistent state that would cause trouble if the top transaction continues to execute. Noted while examining a user complaint about the amount of memory used for this. (There's not much we can do about that, but it does point up that repalloc failure has a non-negligible chance of occurring here.)

  In HEAD/9.5, also avoid possible invocation of memcpy() with a null pointer in SerializeComboCIDState; cf commit 13bba0227.
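  The hazard is the classic grow-the-array ordering problem; a simplified sketch of the safe ordering, with field names only approximating those in combocid.c:

      if (usedComboCids >= sizeComboCids)
      {
          int     newsize = sizeComboCids * 2;

          /*
           * repalloc() reports failure by throwing an error, so update the
           * recorded size only after it has succeeded; otherwise the counters
           * would describe an array that was never actually enlarged.
           */
          comboCids = (ComboCidKeyData *)
              repalloc(comboCids, sizeof(ComboCidKeyData) * newsize);
          sizeComboCids = newsize;
      }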
* Fix handling of inherited check constraints in ALTER COLUMN TYPE (again). (Tom Lane, 2015-11-20)
  The previous way of reconstructing check constraints was to do a separate "ALTER TABLE ONLY tab ADD CONSTRAINT" for each table in an inheritance hierarchy. However, that way has no hope of reconstructing the check constraints' own inheritance properties correctly, as pointed out in bug #13779 from Jan Dirk Zijlstra. What we should do instead is to do a regular "ALTER TABLE", allowing recursion, at the topmost table that has a particular constraint, and then suppress the work queue entries for inherited instances of the constraint.

  Annoyingly, we'd tried to fix this behavior before, in commit 5ed6546cf, but we failed to notice that it wasn't reconstructing the pg_constraint field values correctly.

  As long as I'm touching pg_get_constraintdef_worker anyway, tweak it to always schema-qualify the target table name; this seems like useful backup to the protections installed by commit 5f173040.

  In HEAD/9.5, get rid of get_constraint_relation_oids, which is now unused. (I could alternatively have modified it to also return conislocal, but that seemed like a pretty single-purpose API, so let's not pretend it has some other use.) It's unused in the back branches as well, but I left it in place just in case some third-party code has decided to use it.

  In HEAD/9.5, also rename pg_get_constraintdef_string to pg_get_constraintdef_command, as the previous name did nothing to explain what that entry point did differently from others (and its comment was equally useless). Again, that change doesn't seem like material for back-patching.

  I did a bit of re-pgindenting in tablecmds.c in HEAD/9.5, as well. Otherwise, back-patch to all supported branches.
* Fix possible internal overflow in numeric division. (Tom Lane, 2015-11-17)
  div_var_fast() postpones propagating carries in the same way as mul_var(), so it has the same corner-case overflow risk we fixed in 246693e5ae8a36f0, namely that the size of the carries has to be accounted for when setting the threshold for executing a carry propagation step. We've not devised a test case illustrating the brokenness, but the required fix seems clear enough. Like the previous fix, back-patch to all active branches.

  Dean Rasheed
* Message improvements (Peter Eisentraut, 2015-11-16)
* Speed up ruleutils' name de-duplication code, and fix overlength-name case. (Tom Lane, 2015-11-16)
  Since commit 11e131854f8231a21613f834c40fe9d046926387, ruleutils.c has attempted to ensure that each RTE in a query or plan tree has a unique alias name. However, the code that was added for this could be quite slow, even as bad as O(N^3) if N identical RTE names must be replaced, as noted by Jeff Janes. Improve matters by building a transient hash table within set_rtable_names. The hash table in itself reduces the cost of detecting a duplicate from O(N) to O(1), and we can save another factor of N by storing the number of de-duplicated names already created for each entry, so that we don't have to re-try names already created. This way is probably a bit slower overall for small range tables, but almost by definition, such cases should not be a performance problem.

  In principle the same problem applies to the column-name-de-duplication code; but in practice that seems to be less of a problem, first because N is limited since we don't support extremely wide tables, and second because duplicate column names within an RTE are fairly rare, so that in practice the cost is more like O(N^2) not O(N^3). It would be very much messier to fix the column-name code, so for now I've left that alone.

  An independent problem in the same area was that the de-duplication code paid no attention to the identifier length limit, and would happily produce identifiers that were longer than NAMEDATALEN and wouldn't be unique after truncation to NAMEDATALEN. This could result in dump/reload failures, or perhaps even views that silently behaved differently than before. We can fix that by shortening the base name as needed. Fix it for both the relation and column name cases.

  In passing, check for interrupts in set_rtable_names, just in case it's still slow enough to be an issue.

  Back-patch to 9.3 where this code was introduced.
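  A sketch of the de-duplication loop described above (illustrative only; entry, names_hash and shortened_base are placeholders, not the actual set_rtable_names() variables): the hash table makes duplicate detection O(1), the per-base-name counter lets us resume numbering instead of re-testing names we know are taken, and the base is shortened beforehand so that "base_N" still fits in NAMEDATALEN.

      char        candidate[NAMEDATALEN];
      int         n;

      for (;;)
      {
          n = ++entry->counter;                       /* next suffix to try */
          snprintf(candidate, sizeof(candidate), "%s_%d", shortened_base, n);
          if (hash_search(names_hash, candidate, HASH_FIND, NULL) == NULL)
              break;                                  /* unique; claim it */
      }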
* Fix ruleutils.c's dumping of whole-row Vars in ROW() and VALUES() contexts. (Tom Lane, 2015-11-15)
  Normally ruleutils prints a whole-row Var as "foo.*". We already knew that that doesn't work at top level of a SELECT list, because the parser would treat the "*" as a directive to expand the reference into separate columns, not a whole-row Var. However, Joshua Yanovski points out in bug #13776 that the same thing happens at top level of a ROW() construct; and some nosing around in the parser shows that the same is true in VALUES(). Hence, apply the same workaround already devised for the SELECT-list case, namely to add a forced cast to the appropriate rowtype in these cases. (The alternative of just printing "foo" was rejected because it is difficult to avoid ambiguity against plain columns named "foo".)

  Back-patch to all supported branches.
* Fix erroneous hash calculations in gin_extract_jsonb_path(). (Tom Lane, 2015-11-05)
  The jsonb_path_ops code calculated hash values inconsistently in some cases involving nested arrays and objects. This would result in queries possibly not finding entries that they should find, when using a jsonb_path_ops GIN index for the search. The problem cases involve JSONB values that contain both scalars and sub-objects at the same nesting level, for example an array containing both scalars and sub-arrays. To fix, reset the current stack->hash after processing each value or sub-object, not before; and don't try to be cute about the outermost level's initial hash.

  Correcting this means that existing jsonb_path_ops indexes may now be inconsistent with the new hash calculation code. The symptom is the same --- searches not finding entries they should find --- but the specific rows affected are likely to be different. Users will need to REINDEX jsonb_path_ops indexes to make sure that all searches work as expected.

  Per bug #13756 from Daniel Cheng. Back-patch to 9.4 where the faulty logic was introduced.
* Improve comments about abbreviation abort. (Robert Haas, 2015-11-03)
  Peter Geoghegan
* Message style improvements (Peter Eisentraut, 2015-10-28)
  Message style, plurals, quoting, spelling, consistency with similar messages
* Fix incorrect translation of minus-infinity datetimes for json/jsonb. (Tom Lane, 2015-10-20)
  Commit bda76c1c8cfb1d11751ba6be88f0242850481733 caused both plus and minus infinity to be rendered as "infinity", which is not only wrong but inconsistent with the pre-9.4 behavior of to_json(). Fix that by duplicating the coding in date_out/timestamp_out/timestamptz_out more closely. Per bug #13687 from Stepan Perlov. Back-patch to 9.4, like the previous commit.

  In passing, also re-pgindent json.c, since it had gotten a bit messed up by recent patches (and I was already annoyed by indentation-related problems in back-patching this fix ...)
* Put back ssl_renegotiation_limit parameter, but only allow 0. (Robert Haas, 2015-10-20)
  Per a report from Shay Rojansky, Npgsql sends ssl_renegotiation_limit=0 in the startup packet because it does not support renegotiation; other clients which have not attempted to support renegotiation might well behave similarly. The recent removal of this parameter forces them to break compatibility with either current PostgreSQL versions, or previous ones. Per discussion, the best solution is to accept the parameter but only allow a value of 0.

  Shay Rojansky, edited a little by me.
* Fix NULL handling in datum_to_jsonb(). (Tom Lane, 2015-10-15)
  The function failed to adhere to its specification that the "tcategory" argument should not be examined when the input value is NULL. This resulted in a crash in some cases. Per bug #13680 from Boyko Yordanov.

  In passing, re-pgindent some recent changes in jsonb.c, and fix a rather ungrammatical comment.

  Diagnosis and patch by Michael Paquier, cosmetic changes by me
* Use JsonbIteratorToken consistently in automatic variable declarations. (Noah Misch, 2015-10-12)
  Many functions stored JsonbIteratorToken values in variables of other integer types. Also, standardize order relative to other declarations. Expect compilers to generate the same code before and after this change.
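  For illustration, the intended declaration style looks roughly like this (jb stands for whatever Jsonb datum is already at hand; the loop body is elided):

      JsonbIterator      *it = JsonbIteratorInit(&jb->root);
      JsonbValue          v;
      JsonbIteratorToken  tok;        /* rather than a plain int */

      while ((tok = JsonbIteratorNext(&it, &v, false)) != WJB_DONE)
      {
          /* switch on WJB_BEGIN_ARRAY, WJB_KEY, WJB_VALUE, ... as before */
      }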
* Perform an immediate shutdown if the postmaster.pid file is removed. (Tom Lane, 2015-10-06)
  The postmaster now checks every minute or so (worst case, at most two minutes) that postmaster.pid is still there and still contains its own PID. If not, it performs an immediate shutdown, as though it had received SIGQUIT.

  The original goal behind this change was to ensure that failed buildfarm runs would get fully cleaned up, even if the test scripts had left a postmaster running, which is not an infrequent occurrence. When the buildfarm script removes a test postmaster's $PGDATA directory, its next check on postmaster.pid will fail and cause it to exit. Previously, manual intervention was often needed to get rid of such orphaned postmasters, since they'd block new test postmasters from obtaining the expected socket address.

  However, by checking postmaster.pid and not something else, we can provide additional robustness: manual removal of postmaster.pid is a frequent DBA mistake, and now we can at least limit the damage that will ensue if a new postmaster is started while the old one is still alive.

  Back-patch to all supported branches, since we won't get the desired improvement in buildfarm reliability otherwise.
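  In outline, the periodic check amounts to something like this sketch (simplified; the real code lives in the postmaster's main loop, reuses its existing pid-file reading helpers, and initiate_immediate_shutdown() below is a placeholder):

      FILE   *fp = fopen("postmaster.pid", "r");
      long    filepid = 0;

      if (fp == NULL || fscanf(fp, "%ld", &filepid) != 1 ||
          filepid != (long) getpid())
      {
          /*
           * The file is gone, unreadable, or now names some other process:
           * assume the data directory is no longer ours and shut down at
           * once, as if SIGQUIT had been received.
           */
          initiate_immediate_shutdown();
      }
      if (fp != NULL)
          fclose(fp);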
* Prevent stack overflow in query-type functions. (Noah Misch, 2015-10-05)
  The tsquery, ltxtquery and query_int data types have a common ancestor. Having acquired check_stack_depth() calls independently, each was missing at least one call. Back-patch to 9.0 (all supported versions).
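  The protection follows the usual backend pattern, roughly as below (the QueryNode type and walker are illustrative placeholders; the real fixes touch the existing tsquery/ltxtquery/query_int code):

      #include "postgres.h"
      #include "miscadmin.h"              /* check_stack_depth() */

      typedef struct QueryNode            /* placeholder recursive structure */
      {
          struct QueryNode *left;
          struct QueryNode *right;
      } QueryNode;

      static void
      walk_query(QueryNode *node)
      {
          /*
           * Error out cleanly ("stack depth limit exceeded") instead of
           * crashing with SIGSEGV when the input nests too deeply.
           */
          check_stack_depth();

          if (node->left)
              walk_query(node->left);
          if (node->right)
              walk_query(node->right);
      }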
* Prevent stack overflow in container-type functions. (Noah Misch, 2015-10-05)
  A range type can name another range type as its subtype, and a record type can bear a column of another record type. Consequently, functions like range_cmp() and record_recv() are recursive. Functions at risk include operator family members and referents of pg_type regproc columns. Treat as recursive any such function that looks up and calls the same-purpose function for a record column type or the range subtype. Back-patch to 9.0 (all supported versions).

  An array type's element type is never itself an array type, so array functions are unaffected. Recursion depth proportional to array dimensionality, found in array_dim_to_jsonb(), is fine thanks to MAXDIM.
* Prevent stack overflow in json-related functions. (Noah Misch, 2015-10-05)
  Sufficiently-deep recursion heretofore elicited a SIGSEGV. If an application constructs PostgreSQL json or jsonb values from arbitrary user input, application users could have exploited this to terminate all active database connections. That applies to 9.3, where the json parser adopted recursive descent, and later versions. Only row_to_json() and array_to_json() were at risk in 9.2, both in a non-security capacity. Back-patch to 9.2, where the json type was introduced.

  Oskari Saarenmaa, reviewed by Michael Paquier.

  Security: CVE-2015-5289
* ALTER TABLE .. FORCE ROW LEVEL SECURITY (Stephen Frost, 2015-10-04)
  To allow users to force RLS to always be applied, even for table owners, add ALTER TABLE .. FORCE ROW LEVEL SECURITY.

  row_security=off overrides FORCE ROW LEVEL SECURITY, to ensure pg_dump output is complete (by default).

  Also add SECURITY_NOFORCE_RLS context to avoid data corruption when ALTER TABLE .. FORCE ROW LEVEL SECURITY is being used. The SECURITY_NOFORCE_RLS security context is used only during referential integrity checks and is only considered in check_enable_rls() after we have already checked that the current user is the owner of the relation (which should always be the case during referential integrity checks).

  Back-patch to 9.5 where RLS was added.
* Disallow invalid path elements in jsonb_set (Andrew Dunstan, 2015-10-04)
  Null path elements and, where the object is an array, invalid integer elements now cause an error.

  Incorrect behaviour noted by Thom Brown, patch from Dmitry Dolgov.

  Backpatch to 9.5 where jsonb_set was introduced
* Group cluster_name and update_process_title settings together (Peter Eisentraut, 2015-10-04)
* Make BYPASSRLS behave like superuser RLS bypass. (Noah Misch, 2015-10-03)
  Specifically, make its effect independent from the row_security GUC, and make it affect permission checks pertinent to views the BYPASSRLS role owns. The row_security GUC thereby ceases to change successful-query behavior; it can only make a query fail with an error. Back-patch to 9.5, where BYPASSRLS was introduced.
* Add recursion depth protection to LIKE matching. (Tom Lane, 2015-10-02)
  Since MatchText() recurses, it could in principle be driven to stack overflow, although quite a long pattern would be needed.
* Second try at fixing O(N^2) problem in foreign key references. (Tom Lane, 2015-09-25)
  This replaces ill-fated commit 5ddc72887a012f6a8b85707ef27d85c274faf53d, which was reverted because it broke active uses of FK cache entries. In this patch, we still do nothing more to invalidatable cache entries than mark them as needing revalidation, so we won't break active uses. To keep down the overhead of InvalidateConstraintCacheCallBack(), keep a list of just the currently-valid cache entries. (The entries are large enough that some added space for list links doesn't seem like a big problem.) This would still be O(N^2) when there are many valid entries, though, so when the list gets too long, just force the "sinval reset" behavior to remove everything from the list. I set the threshold at 1000 entries, somewhat arbitrarily. Possibly that could be fine-tuned later.

  Another item for future study is whether it's worth adding reference counting so that we could safely remove invalidated entries. As-is, problem cases are likely to end up with large and mostly invalid FK caches.

  Like the previous attempt, backpatch to 9.3.

  Jan Wieck and Tom Lane
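  A sketch of the mechanism (names only approximate those in ri_triggers.c, and invalidate_all_entries(), valid_count and valid_list are placeholders): valid entries carry a list link, invalidation walks only that list and marks entries, and once the list is too long everything is invalidated at once.

      #define RI_CACHE_VALID_LIST_MAX     1000    /* threshold noted above */

      static void
      InvalidateConstraintCacheCallBack(Datum arg, int cacheid, uint32 hashvalue)
      {
          dlist_mutable_iter iter;

          if (valid_count > RI_CACHE_VALID_LIST_MAX)
          {
              /* list too long to scan cheaply: fall back to a full "sinval reset" */
              invalidate_all_entries();
              return;
          }

          dlist_foreach_modify(iter, &valid_list)
          {
              RI_ConstraintInfo *info =
                  dlist_container(RI_ConstraintInfo, valid_link, iter.cur);

              info->valid = false;    /* mark only; entry may be in active use */
              dlist_delete(&info->valid_link);
          }
      }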
* Allow planner to use expression-index stats for function calls in WHERE. (Tom Lane, 2015-09-24)
  Previously, a function call appearing at the top level of WHERE had a hard-wired selectivity estimate of 0.3333333, a kludge conveniently dated in the source code itself to July 1992. The expectation at the time was that somebody would soon implement estimator support functions analogous to those for operators; but no such code has appeared, nor does it seem likely to in the near future. We do have an alternative solution though, at least for immutable functions on single relations: creating an expression index on the function call will allow ANALYZE to gather stats about the function's selectivity. But the code in clause_selectivity() failed to make use of such data even if it exists. Refactor so that that will happen. I chose to make it try this technique for any clause type for which clause_selectivity() doesn't have a special case, not just functions.

  To avoid adding unnecessary overhead in the common case where we don't learn anything new, make selfuncs.c provide an API that hooks directly to examine_variable() and then var_eq_const(), rather than the previous coding which laboriously constructed an OpExpr only so that it could be expensively deconstructed again.

  I preserved the behavior that the default estimate for a function call is 0.3333333. (For any other expression node type, it's 0.5, as before.) I had originally thought to make the default be 0.5 across the board, but changing a default estimate that's survived for twenty-three years seems like something not to do without a lot more testing than I care to put into it right now.

  Per a complaint from Jehan-Guillaume de Rorthais. Back-patch into 9.5, but not further, at least for the moment.
* Lower *_freeze_max_age minimum values. (Andres Freund, 2015-09-24)
  The old minimum values are rather large, making it time consuming to test related behaviour. Additionally the current limits, especially for multixacts, can be problematic in space-constrained systems. 10000000 multixacts can contain a lot of members.

  Since there's no good reason for the current limits, lower them a good bit. Setting them to 0 would be a bad idea, triggering endless vacuums, so still retain a limit.

  While at it fix autovacuum_multixact_freeze_max_age to refer to multixact.c instead of varsup.c.

  Reviewed-By: Robert Haas
  Discussion: CA+TgmoYmQPHcrc3GSs7vwvrbTkbcGD9Gik=OztbDGGrovkkEzQ@mail.gmail.com
  Backpatch: back to 9.0 (in parts)
* Fix possible internal overflow in numeric multiplication. (Tom Lane, 2015-09-21)
  mul_var() postpones propagating carries until it risks overflow in its internal digit array. However, the logic failed to account for the possibility of overflow in the carry propagation step, allowing wrong results to be generated in corner cases. We must slightly reduce the when-to-propagate-carries threshold to avoid that.

  Discovered and fixed by Dean Rasheed, with small adjustments by me.

  This has been wrong since commit d72f6c75038d8d37e64a29a04b911f728044d83b, so back-patch to all supported branches.
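  A rough sketch of the adjusted guard (illustrative; the real mul_var() bookkeeping is more involved, and maxdig/var1digit/dig are approximations of its variables): the threshold has to leave headroom not only for the next row of products but also for the values a carry-propagation pass can itself leave behind in a cell.

      /*
       * maxdig bounds the accumulator: any dig[] cell can hold up to
       * maxdig * (NBASE - 1).  Carry propagation can itself leave up to
       * INT_MAX / NBASE in a cell, so reserve that much headroom rather
       * than allowing maxdig to reach INT_MAX / (NBASE - 1).
       */
      maxdig += var1digit;
      if (maxdig > (INT_MAX - INT_MAX / NBASE) / (NBASE - 1))
      {
          /* ... propagate carries through dig[], then reset the bound ... */
          maxdig = 1 + var1digit;
      }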
* Remove the SECURITY_ROW_LEVEL_DISABLED security context bit. (Noah Misch, 2015-09-20)
  This commit's parent made superfluous the bit's sole usage. Referential integrity checks have long run as the subject table's owner, and that now implies RLS bypass. Safe use of the bit was tricky, requiring strict control over the SQL expressions evaluating therein. Back-patch to 9.5, where the bit was introduced.

  Based on a patch by Stephen Frost.
* Remove the row_security=force GUC value. (Noah Misch, 2015-09-20)
  Every query of a single ENABLE ROW SECURITY table has two meanings, with the row_security GUC selecting between them. With row_security=force available, every function author would have been advised to either set the GUC locally or test both meanings. Non-compliance would have threatened reliability and, for SECURITY DEFINER functions, security. Authors already face an obligation to account for search_path, and we should not mimic that example. With this change, only BYPASSRLS roles need exercise the aforementioned care. Back-patch to 9.5, where the row_security GUC was introduced.

  Since this narrows the domain of pg_db_role_setting.setconfig and pg_proc.proconfig, one might bump catversion. A row_security=force setting in one of those columns will elicit a clear message, so don't.
* Cache argument type information in json(b) aggregate functions. (Andrew Dunstan, 2015-09-18)
  These functions have been looking up type info for every row they process. Instead of doing that we only look them up the first time through and stash the information in the aggregate state object.

  Affects json_agg, json_object_agg, jsonb_agg and jsonb_object_agg.

  There is plenty more work to do in making these more efficient, especially the jsonb functions, but this is a virtually cost free improvement that can be done right away.

  Backpatch to 9.5 where the jsonb variants were introduced.
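  The caching pattern in outline, as a simplified sketch (the struct and field names below are placeholders standing in for the aggregate state; the real code also allocates the state in the aggregate's memory context):

      typedef struct JsonAggCache         /* placeholder state fields */
      {
          Oid     val_type;
          Oid     val_output_func;        /* looked up once, reused per row */
          bool    typisvarlena;
      } JsonAggCache;

      if (PG_ARGISNULL(0))
      {
          /* first row: build the state and do the catalog lookups once */
          state = palloc(sizeof(JsonAggCache));
          state->val_type = get_fn_expr_argtype(fcinfo->flinfo, 1);
          getTypeOutputInfo(state->val_type,
                            &state->val_output_func, &state->typisvarlena);
      }
      else
          state = (JsonAggCache *) PG_GETARG_POINTER(0);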
* RLS refactoring (Stephen Frost, 2015-09-15)
  This refactors rewrite/rowsecurity.c to simplify the handling of the default deny case (reducing the number of places where we check for and add the default deny policy from three to one) by splitting up the retrieval of the policies from the application of them.

  This also allowed us to do away with the policy_id field. A policy_name field was added for WithCheckOption policies and is used in error reporting, when available.

  Patch by Dean Rasheed, with various mostly cosmetic changes by me.

  Back-patch to 9.5 where RLS was introduced to avoid unnecessary differences, since we're still in alpha, per discussion with Robert.
* Revert "Fix an O(N^2) problem in foreign key references".Tom Lane2015-09-15
| | | | | | | | Commit 5ddc72887a012f6a8b85707ef27d85c274faf53d does not actually work because it will happily blow away ri_constraint_cache entries that are in active use in outer call levels. In any case, it's a very ugly, brute-force solution to the problem of limiting the cache size. Revert until it can be redesigned.
* Fix the fastpath rule for jsonb_concat with an empty operand. (Andrew Dunstan, 2015-09-13)
  To prevent perverse results, we now only return the other operand if it's not scalar, and if both operands are of the same kind (array or object).

  Original bug complaint and patch from Oskari Saarenmaa, extended by me to cover the cases of different kinds of jsonb.

  Backpatch to 9.5 where jsonb_concat was introduced.
* Fix an O(N^2) problem in foreign key references. (Kevin Grittner, 2015-09-11)
  Commit 45ba424f improved foreign key lookups during bulk updates when the FK value does not change. When restoring a schema dump from a database with many (say 100,000) foreign keys, this cache would grow very big and every ALTER TABLE command was causing an InvalidateConstraintCacheCallBack(), which uses a sequential hash table scan. This could cause a severe performance regression in restoring a schema dump (including during pg_upgrade).

  The patch uses a heuristic method of detecting when the hash table should be destroyed and recreated. InvalidateConstraintCacheCallBack() adds the current size of the hash table to a counter. When that sum reaches 1,000,000, the hash table is flushed. This fixes the regression without noticeable harm to the bulk update use case.

  Jan Wieck

  Backpatch to 9.3 where the performance regression was introduced.
* Add gin_fuzzy_search_limit to postgresql.conf.sample. (Fujii Masao, 2015-09-09)
  This was forgotten in 8a3631f (commit that originally added the parameter) and 0ca9907 (commit that added the documentation later that year).

  Back-patch to all supported versions.
* Move DTK_ISODOW DTK_DOW and DTK_DOY to be type UNITS rather than RESERV. (Greg Stark, 2015-09-06)
  RESERV is meant for tokens like "now" and having them in that category throws errors like these when used as an input date:

    stark=# SELECT 'doy'::timestamptz;
    ERROR:  unexpected dtype 33 while parsing timestamptz "doy"
    LINE 1: SELECT 'doy'::timestamptz;
                   ^
    stark=# SELECT 'dow'::timestamptz;
    ERROR:  unexpected dtype 32 while parsing timestamptz "dow"
    LINE 1: SELECT 'dow'::timestamptz;
                   ^

  Found by LLVM's Libfuzzer
* Fix misc typos. (Heikki Linnakangas, 2015-09-05)
  Oskari Saarenmaa. Backpatch to stable branches where applicable.
* Fix subtransaction cleanup after an outer-subtransaction portal fails. (Tom Lane, 2015-09-04)
  Formerly, we treated only portals created in the current subtransaction as having failed during subtransaction abort. However, if the error occurred while running a portal created in an outer subtransaction (ie, a cursor declared before the last savepoint), that has to be considered broken too.

  To allow reliable detection of which ones those are, add a bookkeeping field to struct Portal that tracks the innermost subtransaction in which each portal has actually been executed. (Without this, we'd end up failing portals containing functions that had called the subtransaction, thereby breaking plpgsql exception blocks completely.)

  In addition, when we fail an outer-subtransaction Portal, transfer its resources into the subtransaction's resource owner, so that they're released early in cleanup of the subxact. This fixes a problem reported by Jim Nasby in which a function executed in an outer-subtransaction cursor could cause an Assert failure or crash by referencing a relation created within the inner subtransaction.

  The proximate cause of the Assert failure is that AtEOSubXact_RelationCache assumed it could blow away a relcache entry without first checking that the entry had zero refcount. That was a bad idea on its own terms, so add such a check there, and to the similar coding in AtEOXact_RelationCache. This provides an independent safety measure in case there are still ways to provoke the situation despite the Portal-level changes.

  This has been broken since subtransactions were invented, so back-patch to all supported branches.

  Tom Lane and Michael Paquier
* Improve whitespace (Peter Eisentraut, 2015-08-22)
* Allow record_in() and record_recv() to work for transient record types. (Tom Lane, 2015-08-21)
  If we have the typmod that identifies a registered record type, there's no reason that record_in() should refuse to perform input conversion for it. Now, in direct SQL usage, record_in() will always be passed typmod = -1 with type OID RECORDOID, because no typmodin exists for type RECORD, so the case can't arise. However, some InputFunctionCall users such as PLs may be able to supply the right typmod, so we should allow this to support them.

  Note: the previous coding and comment here predate commit 59c016aa9f490b53. There has been no case since 8.1 in which the passed type OID wouldn't be valid; and if it weren't, this error message wouldn't be apropos anyway. Better to let lookup_rowtype_tupdesc complain about it.

  Back-patch to 9.1, as this is necessary for my upcoming plpython fix. I'm committing it separately just to make it a bit more visible in the commit history.
* Don't use 'bool' as a struct member name in help_config.c. (Andres Freund, 2015-08-15)
  Doing so doesn't work if bool is a macro rather than a typedef.

  Although c.h spends some effort to support configurations where bool is a preexisting macro, help_config.c has existed this way since 2003 (b700a6), and there have not been any reports of problems. Backpatch anyway since this is as riskless as it gets.

  Discussion: 20150812084351.GD8470@awork2.anarazel.de
  Backpatch: 9.0-master
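  Why the member name matters, in miniature (the structs below are illustrative, not the one in help_config.c):

      #define bool char               /* some platforms define bool as a macro */

      /* broken: the member name is macro-expanded too */
      struct opt_broken
      {
          bool    bool;               /* becomes "char char;" -- syntax error */
      };

      /* fixed: any member name other than "bool" works either way */
      struct opt_fixed
      {
          bool    boolval;
      };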
* Encoding PG_UHC is code page 949. (Noah Misch, 2015-08-14)
  This fixes presentation of non-ASCII messages to the Windows event log and console in rare cases involving Korean locale. Processes like the postmaster and checkpointer, but not processes attached to databases, were affected. Back-patch to 9.4, where MessageEncoding was introduced. The problem exists in all supported versions, but this change has no effect in the absence of the code recognizing PG_UHC MessageEncoding.

  Noticed while investigating bug #13427 from Dmitri Bourlatchkov.
* Restore old pgwin32_message_to_UTF16() behavior outside transactions. (Noah Misch, 2015-08-14)
  Commit 49c817eab78c6f0ce8c3bf46766b73d6cf3190b7 replaced with a hard error the dubious pg_do_encoding_conversion() behavior when outside a transaction. Reintroduce the historic soft failure locally within pgwin32_message_to_UTF16(). This fixes errors when writing messages in less-common encodings to the Windows event log or console. Back-patch to 9.4, where the aforementioned commit first appeared.

  Per bug #13427 from Dmitri Bourlatchkov.
* Fix bogus "out of memory" reports in tuplestore.c.Tom Lane2015-08-04
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | The tuplesort/tuplestore memory management logic assumed that the chunk allocation overhead for its memtuples array could not increase when increasing the array size. This is and always was true for tuplesort, but we (I, I think) blindly copied that logic into tuplestore.c without noticing that the assumption failed to hold for the much smaller array elements used by tuplestore. Given rather small work_mem, this could result in an improper complaint about "unexpected out-of-memory situation", as reported by Brent DeSpain in bug #13530. The easiest way to fix this is just to increase tuplestore's initial array size so that the assumption holds. Rather than relying on magic constants, though, let's export a #define from aset.c that represents the safe allocation threshold, and make tuplestore's calculation depend on that. Do the same in tuplesort.c to keep the logic looking parallel, even though tuplesort.c isn't actually at risk at present. This will keep us from breaking it if we ever muck with the allocation parameters in aset.c. Back-patch to all supported versions. The error message doesn't occur pre-9.3, not so much because the problem can't happen as because the pre-9.3 tuplestore code neglected to check for it. (The chance of trouble is a great deal larger as of 9.3, though, due to changes in the array-size-increasing strategy.) However, allowing LACKMEM() to become true unexpectedly could still result in less-than-desirable behavior, so let's patch it all the way back.
* Cap wal_buffers to avoid a server crash when it's set very large. (Robert Haas, 2015-08-04)
  It must be possible to multiply wal_buffers by XLOG_BLCKSZ without overflowing int, or calculations in StartupXLOG will go badly wrong and crash the server. Avoid that by imposing a maximum value on wal_buffers. This will be just under 2GB, assuming the usual value for XLOG_BLCKSZ.

  Josh Berkus, per an analysis by Andrew Gierth.
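  The arithmetic behind the cap, as a sketch (the constant name is illustrative, not the actual guc.c symbol):

      #include <limits.h>

      /*
       * StartupXLOG computes wal_buffers * XLOG_BLCKSZ as an int, so the GUC's
       * upper bound must keep that product within INT_MAX.  With the usual
       * XLOG_BLCKSZ of 8192 this works out to 262143 buffers, i.e. just under
       * 2GB of WAL buffer space.
       */
      #define WAL_BUFFERS_MAX     (INT_MAX / XLOG_BLCKSZ)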
* Update comment to match behavior of latest code. (Robert Haas, 2015-08-04)
  Peter Geoghegan
* Fix a number of places that produced XX000 errors in the regression tests. (Tom Lane, 2015-08-02)
  It's against project policy to use elog() for user-facing errors, or to omit an errcode() selection for errors that aren't supposed to be "can't happen" cases. Fix all the violations of this policy that result in ERRCODE_INTERNAL_ERROR log entries during the standard regression tests, as errors that can reliably be triggered from SQL surely should be considered user-facing.

  I also looked through all the files touched by this commit and fixed other nearby problems of the same ilk. I do not claim to have fixed all violations of the policy, just the ones in these files.

  In a few places I also changed existing ERRCODE choices that didn't seem particularly appropriate; mainly replacing ERRCODE_SYNTAX_ERROR by something more specific.

  Back-patch to 9.5, but no further; changing ERRCODE assignments in stable branches doesn't seem like a good idea.
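  The policy in miniature (the message text and errcode below are illustrative examples, not taken from a specific call site):

      /* Before: a user-triggerable error reported via elog() surfaces with
       * SQLSTATE XX000 (ERRCODE_INTERNAL_ERROR). */
      elog(ERROR, "invalid flag value: \"%s\"", str);

      /* After: user-facing errors go through ereport() with a deliberately
       * chosen errcode. */
      ereport(ERROR,
              (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
               errmsg("invalid flag value: \"%s\"", str)));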