path: root/src
Commit message    Author    Age
...
* Further adjust degree-based trig functions for more portability.  (Tom Lane, 2016-01-23)
    The last round didn't do it. Per Noah Misch, the problem on at least some machines is that the compiler pre-evaluates trig functions having constant arguments using code slightly different from what will be used at runtime. Therefore, we must prevent the compiler from seeing constant arguments to any of the libm trig functions used in this code.

    The method used here might still fail if init_degree_constants() gets inlined into the call sites. That probably won't happen given the large number of call sites; but if it does, we could probably fix it by making init_degree_constants() non-static. I'll avoid that till proven necessary, though.
* Adjust degree-based trig functions for more portability.  (Tom Lane, 2016-01-23)
    The buildfarm isn't very happy with the results of commit e1bd684a34c11139. To try to get the expected exact results everywhere:

    * Replace M_PI / 180 subexpressions with a precomputed constant, so that the compiler can't decide to rearrange that division with an adjacent operation. Hopefully this will fix failures to get exactly 0.5 from sind(30) and cosd(60).

    * Add scaling to ensure that tand(45) and cotd(45) give exactly 1; there was nothing particularly guaranteeing that before.

    * Replace minus zero by zero when tand() or cotd() would output that; many machines did so for tand(180) and cotd(270), but not all. We could alternatively deem both results valid, but that doesn't seem likely to be what users will want.
* psql: Improve completion of FDW DDL commands  (Peter Eisentraut, 2016-01-23)
    Add
    - ALTER FOREIGN DATA WRAPPER -> RENAME TO
    - ALTER SERVER -> RENAME TO
    - ALTER SERVER ... VERSION ... -> OPTIONS
    - CREATE FOREIGN DATA WRAPPER -> OPTIONS
    - CREATE SERVER -> OPTIONS
    - CREATE|ALTER USER MAPPING -> OPTIONS

    From: Andreas Karlsson <andreas@proxel.se>
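    For reference, the statement forms these completions target look roughly like the following sketch; the object names (my_fdw, my_server) and option values are placeholders, not anything taken from the patch:

        -- my_fdw, my_server and the option values are placeholders
        ALTER FOREIGN DATA WRAPPER my_fdw RENAME TO my_fdw_v2;
        ALTER SERVER my_server RENAME TO my_server_v2;
        ALTER SERVER my_server VERSION '2.0' OPTIONS (SET host 'db.example.com');
        CREATE FOREIGN DATA WRAPPER my_fdw OPTIONS (debug 'true');
        CREATE SERVER my_server FOREIGN DATA WRAPPER my_fdw OPTIONS (host 'db.example.com');
        CREATE USER MAPPING FOR current_user SERVER my_server OPTIONS (user 'app', password 'secret');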
* pg_dump: Fix quoting of domain constraint names  (Alvaro Herrera, 2016-01-22)
    The original code was adding double quotes to an already-quoted identifier, leading to nonsensical results. Remove the quoting call.

    I introduced the broken code in 7eca575d1c of 9.5 era, so backpatch to 9.5.

    Report and patch by Elvis Pranskevichus
    Reviewed by Michael Paquier
* Add trigonometric functions that work in degrees.  (Tom Lane, 2016-01-22)
    The implementations go to some lengths to deliver exact results for values where an exact result can be expected, such as sind(30) = 0.5 exactly.

    Dean Rasheed, reviewed by Michael Paquier
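    A quick sanity check of the new functions, illustrating the exact results the commit message refers to:

        SELECT sind(30), cosd(60), tand(45);
        -- expected: 0.5 | 0.5 | 1   (exact, per the goal described above)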
* Improve cross-platform consistency of Inf/NaN handling in trig functions.  (Tom Lane, 2016-01-22)
    Ensure that the trig functions return NaN for NaN input regardless of what the underlying C library functions might do. Also ensure that an error is thrown for Inf (or otherwise out-of-range) input, except for atan/atan2 which should accept it. All these behaviors should now conform to the POSIX spec; previously, all our popular platforms deviated from that in one case or another.

    The main remaining platform dependency here is whether the C library might choose to throw a domain error for sin/cos/tan inputs that are large but less than infinity. (Doing so is not unreasonable, since once a single unit-in-the-last-place exceeds PI, there can be no significance at all in the result; however there doesn't seem to be any suggestion in POSIX that such an error is allowed.) We will report such errors if they are reported via "errno", but not if they are reported via "fetestexcept" which is the other mechanism sanctioned by POSIX. Some preliminary experiments with fetestexcept indicated that it might also report errors we could do without, such as complaining about underflow at an unreasonably large threshold. So let's skip that complexity for now.

    Dean Rasheed, reviewed by Michael Paquier
* Remove new coupling between NAMEDATALEN and MAX_LEVENSHTEIN_STRLEN.  (Tom Lane, 2016-01-22)
    Commit e529cd4ffa605c6f introduced an Assert requiring NAMEDATALEN to be less than MAX_LEVENSHTEIN_STRLEN, which has been 255 for a long time. Since up to that instant we had always allowed NAMEDATALEN to be substantially more than that, this was ill-advised.

    It's debatable whether we need MAX_LEVENSHTEIN_STRLEN at all (versus putting a CHECK_FOR_INTERRUPTS into the loop), or whether it has to be so tight; but this patch takes the narrower approach of just not applying the MAX_LEVENSHTEIN_STRLEN limit to calls from the parser. Trusting the parser for this seems reasonable, first because the strings are limited to NAMEDATALEN which is unlikely to be hugely more than 256, and second because the maximum distance is tightly constrained by MAX_FUZZY_DISTANCE (though we'd forgotten to make use of that limit in one place). That means the cost is not really O(mn) but more like O(max(m,n)).

    Relaxing the limit for user-supplied calls is left for future research; given the lack of complaints to date, it doesn't seem very high priority.

    In passing, fix confusion between lengths-in-bytes and lengths-in-chars in comments and error messages.

    Per gripe from Kevin Day; solution suggested by Robert Haas. Back-patch to 9.5 where the unwanted restriction was introduced.
* Make extract() do something more reasonable with infinite datetimes.  (Tom Lane, 2016-01-21)
    Historically, extract() just returned zero for any case involving an infinite timestamp[tz] input; even cases in which the unit name was invalid. This is not very sensible. Instead, return infinity or -infinity as appropriate when the requested field is one that is monotonically increasing (e.g., year, epoch), or NULL when it is not (e.g., day, hour). Also, throw the expected errors for bad unit names.

    BACKWARDS INCOMPATIBLE CHANGE

    Vitaly Burovoy, reviewed by Vik Fearing
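    The new behavior can be illustrated as follows, with the results the commit message describes (infinity for monotonically increasing fields, NULL otherwise, errors for bad unit names):

        SELECT extract(epoch FROM 'infinity'::timestamptz);   -- Infinity
        SELECT extract(year  FROM '-infinity'::timestamptz);  -- -Infinity
        SELECT extract(hour  FROM 'infinity'::timestamptz);   -- NULL
        SELECT extract(bogus FROM 'infinity'::timestamptz);   -- now raises an error instead of returning zero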
* Suppress compiler warning.  (Tom Lane, 2016-01-21)
    Given the limited range of i, these shifts should not cause any problem, but that apparently doesn't stop some compilers from whining about them.

    David Rowley
* Improve index AMs' opclass validation procedures.  (Tom Lane, 2016-01-21)
    The amvalidate functions added in commit 65c5fcd353a859da were on the crude side. Improve them in a few ways:

    * Perform signature checking for operators and support functions.

    * Apply more thorough checks for missing operators and functions, where possible.

    * Instead of reporting problems as ERRORs, report most problems as INFO messages and make the amvalidate function return FALSE. This allows more than one problem to be discovered per run.

    * Report object names rather than OIDs, and work a bit harder on making the messages understandable.

    Also, remove a few more opr_sanity regression test queries that are now superseded by the amvalidate checks.
* Add defenses against putting expanded objects into Const nodes.  (Tom Lane, 2016-01-21)
    Putting a reference to an expanded-format value into a Const node would be a bad idea for a couple of reasons. It'd be possible for the supposedly immutable Const to change value, if something modified the referenced variable ... in fact, if the Const's reference were R/W, any function that has the Const as argument might itself change it at runtime. Also, because datumIsEqual() is pretty simplistic, the Const might fail to compare equal to other Consts that it should compare equal to, notably including copies of itself. This could lead to unexpected planner behavior, such as "could not find pathkey item to sort" errors or inferior plans.

    I have not been able to find any way to get an expanded value into a Const within the existing core code; but Paul Ramsey was able to trigger the problem by writing a datatype input function that returns an expanded value.

    The best fix seems to be to establish a rule that varlena values being placed into Const nodes should be passed through pg_detoast_datum(). That will do nothing (and cost little) in normal cases, but it will flatten expanded values and thereby avoid the above problems. Also, it will convert short-header or compressed values into canonical format, which will avoid possible unexpected lack-of-equality issues for those cases too. And it provides a last-ditch defense against putting a toasted value into a Const, which we already knew was dangerous, cf commit 2b0c86b66563cf2f. (In the light of this discussion, I'm no longer sure that that commit provided 100% protection against such cases, but this fix should do it.)

    The test added in commit 65c3d05e18e7c530 to catch datatype input functions with unstable results would fail for functions that returned expanded values; but it seems a bit uncharitable to deem a result unstable just because it's expressed in expanded form, so revise the coding so that we check for bitwise equality only after applying pg_detoast_datum(). That's a sufficient condition anyway given the new rule about detoasting when forming a Const.

    Back-patch to 9.5 where the expanded-object facility was added. It's possible that this should go back further; but in the absence of clear evidence that there's any live bug in older branches, I'll refrain for now.
* Remove unused argument from ginInsertCleanup()  (Fujii Masao, 2016-01-22)
    It's an oversight in commit dc943ad.
* Refactor headers to split out standby defs  (Simon Riggs, 2016-01-20)
    Jeff Janes
* Speedup 2PC by skipping two phase state files in normal path  (Simon Riggs, 2016-01-20)
    2PC state info is written only to WAL at PREPARE, then read back from WAL at COMMIT PREPARED/ABORT PREPARED. Prepared transactions that live past one bufmgr checkpoint cycle will be written to disk in the same form as previously. Crash recovery path is not altered.

    Measured performance gains of 50-100% for short 2PC transactions by completely avoiding writing files and fsyncing. Other optimizations still available, further patches in related areas expected.

    Stas Kelvich and heavily edited by Simon Riggs

    Based upon earlier ideas and patches by Michael Paquier and Heikki Linnakangas, a concrete example of how Postgres-XC has fed back ideas into PostgreSQL.

    Reviewed by Michael Paquier, Jeff Janes and Andres Freund
    Performance testing by Jesper Pedersen
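    The fast path being optimized is the ordinary two-phase command sequence, sketched below; the table and transaction identifier are placeholders, and the setup assumes max_prepared_transactions > 0:

        BEGIN;
        UPDATE accounts SET balance = balance - 100 WHERE id = 1;  -- placeholder table
        PREPARE TRANSACTION 'tx_1';  -- 2PC state is written to WAL here
        COMMIT PREPARED 'tx_1';      -- state read back from WAL; no separate state file in the normal path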
* psql: Add tab completion for COPY with query  (Peter Eisentraut, 2016-01-20)
    From: Andreas Karlsson <andreas@proxel.se>
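    The form that now gets tab completion is COPY with a parenthesized query rather than a table name, for example:

        COPY (SELECT relname, relkind FROM pg_class WHERE relkind = 'r') TO STDOUT;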
* Refactor to create generic WAL page read callback  (Simon Riggs, 2016-01-20)
    Previously we didn’t have a generic WAL page read callback function, surprisingly. Logical decoding has logical_read_local_xlog_page(), which was actually generic, so move that to xlogfunc.c and rename to read_local_xlog_page(). Maintain logical_read_local_xlog_page() so existing callers still work.

    As requested by Michael Paquier, Alvaro Herrera and Andres Freund
* Support parallel joins, and make related improvements.  (Robert Haas, 2016-01-20)
    The core innovation of this patch is the introduction of the concept of a partial path; that is, a path which if executed in parallel will generate a subset of the output rows in each process. Gathering a partial path produces an ordinary (complete) path. This allows us to generate paths for parallel joins by joining a partial path for one side (which at the baserel level is currently always a Partial Seq Scan) to an ordinary path on the other side. This is subject to various restrictions at present, especially that this strategy seems unlikely to be sensible for merge joins, so only nested loops and hash joins paths are generated.

    This also allows an Append node to be pushed below a Gather node in the case of a partitioned table.

    Testing revealed that early versions of this patch made poor decisions in some cases, which turned out to be caused by the fact that the original cost model for Parallel Seq Scan wasn't very good. So this patch tries to make some modest improvements in that area.

    There is much more to be done in the area of generating good parallel plans in all cases, but this seems like a useful step forward.

    Patch by me, reviewed by Dilip Kumar and Amit Kapila.
* Support multi-stage aggregation.  (Robert Haas, 2016-01-20)
    Aggregate nodes now have two new modes: a "partial" mode where they output the unfinalized transition state, and a "finalize" mode where they accept unfinalized transition states rather than individual values as input.

    These new modes are not used anywhere yet, but they will be necessary for parallel aggregation. The infrastructure also figures to be useful for cases where we want to aggregate local data and remote data via the FDW interface, and want to bring back partial aggregates from the remote side that can then be combined with locally generated partial aggregates to produce the final value. It may also be useful even when neither FDWs nor parallelism are in play, as explained in the comments in nodeAgg.c.

    David Rowley and Simon Riggs, reviewed by KaiGai Kohei, Heikki Linnakangas, Haribabu Kommi, and me.
* PostgresNode: Add names to nodes  (Alvaro Herrera, 2016-01-20)
    This makes the log files easier to follow when investigating a test failure.

    Author: Michael Paquier
    Review: Noah Misch
* Properly install dynloader.h on MSVC builds  (Bruce Momjian, 2016-01-19)
    This will enable PL/Java to be cleanly compiled, as dynloader.h is a requirement.

    Report by Chapman Flack
    Patch by Michael Paquier
    Backpatch through 9.1
* Fix assorted inconsistencies in GIN opclass support function declarations.  (Tom Lane, 2016-01-19)
    GIN had some minor issues too, mostly using "internal" where something else would be more appropriate. I went with the same approach as in 9ff60273e35cad6e, namely preferring the opclass' indexed datatype for arguments that receive an operator RHS value, even if that's not necessarily what they really are.

    Again, this is with an eye to having a uniform rule for ginvalidate() to check support function signatures.
* Add two HyperLogLog functions  (Alvaro Herrera, 2016-01-19)
    New functions initHyperLogLogError() and freeHyperLogLog() simplify using this module from elsewhere.

    Author: Tomáš Vondra
    Review: Peter Geoghegan
* Fix assorted inconsistencies in GiST opclass support function declarations.  (Tom Lane, 2016-01-19)
    The conventions specified by the GiST SGML documentation were widely ignored. For example, the strategy-number argument for "consistent" and "distance" functions is specified to be a smallint, but most of the built-in support functions declared it as an integer, and for that matter the core code passed it using Int32GetDatum not Int16GetDatum. None of that makes any real difference at runtime, but it's quite confusing for newcomers to the code, and it makes it very hard to write an amvalidate() function that checks support function signatures. So let's try to instill some consistency here.

    Another similar issue is that the "query" argument is not of a single well-defined type, but could have different types depending on the strategy (corresponding to search operators with different righthand-side argument types). Some of the functions threw up their hands and declared the query argument as being of "internal" type, which surely isn't right ("any" would have been more appropriate); but the majority position seemed to be to declare it as being of the indexed data type, corresponding to a search operator with both input types the same. So I've specified a convention that that's what to do always.

    Also, the result of the "union" support function actually must be of the index's storage type, but the documentation suggested declaring it to return "internal", and some of the functions followed that. Standardize on telling the truth, instead. Similarly, standardize on declaring the "same" function's inputs as being of the storage type, not "internal".

    Also, somebody had forgotten to add the "recheck" argument to both the documentation of the "distance" support function and all of their SQL declarations, even though the C code was happily using that argument. Clean that up too. Fix up some other omissions in the docs too, such as documenting that union's second input argument is vestigial.

    So far as the errors in core function declarations go, we can just fix pg_proc.h and bump catversion. Adjusting the erroneous declarations in contrib modules is more debatable: in principle any change in those scripts should involve an extension version bump, which is a pain. However, since these changes are purely cosmetic and make no functional difference, I think we can get away without doing that.
* Remove Cygwin-specific code from pg_ctl  (Andrew Dunstan, 2016-01-19)
    This code has been there for a long time, but it's never really been needed. Cygwin has its own utility for registering, unregistering, stopping and starting Windows services, and that's what's used in the Cygwin postgres packages. So now pg_ctl for Cygwin looks like it is for any Unix platform.

    Michael Paquier and me
* Add explicit cast to amcostestimate call.  (Tom Lane, 2016-01-17)
    My compiler doesn't complain here, but David Rowley's does ...
* Restructure index access method API to hide most of it at the C level.  (Tom Lane, 2016-01-17)
    This patch reduces pg_am to just two columns, a name and a handler function. All the data formerly obtained from pg_am is now provided in a C struct returned by the handler function. This is similar to the designs we've adopted for FDWs and tablesample methods. There are multiple advantages. For one, the index AM's support functions are now simple C functions, making them faster to call and much less error-prone, since the C compiler can now check function signatures. For another, this will make it far more practical to define index access methods in installable extensions.

    A disadvantage is that SQL-level code can no longer see attributes of index AMs; in particular, some of the crosschecks in the opr_sanity regression test are no longer possible from SQL. We've addressed that by adding a facility for the index AM to perform such checks instead. (Much more could be done in that line, but for now we're content if the amvalidate functions more or less replace what opr_sanity used to do.) We might also want to expose some sort of reporting functionality, but this patch doesn't do that.

    Alexander Korotkov, reviewed by Petr Jelínek, and rather heavily editorialized on by me.
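    After this change, the SQL-visible description of an index AM is reduced to its name and handler function; a quick look at the slimmed-down catalog might be:

        SELECT amname, amhandler FROM pg_am;
        -- e.g. btree's handler function is bthandler; everything else now
        -- comes from the C struct that the handler returns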
* Re-pgindent a few files.  (Tom Lane, 2016-01-17)
    In preparation for landing index AM interface changes.
* Remove dead code in pg_dump.  (Tom Lane, 2016-01-17)
    Coverity quite reasonably complained that this check for fout==NULL occurred after we'd already dereferenced fout. However, the check is just dead code since there is no code path by which CreateArchive can return a null pointer. Errors such as can't-open-that-file are reported down inside CreateArchive, and control doesn't return. So let's silence the warning by removing the dead code, rather than continuing to pretend it does something.

    Coverity didn't complain about this before 5b5fea2a1, so back-patch to 9.5 like that patch.
* psql: Add completion support for DROP INDEX CONCURRENTLY  (Peter Eisentraut, 2016-01-16)
    based on patch by Kyotaro Horiguchi
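    The command form being completed is, with a placeholder index name:

        DROP INDEX CONCURRENTLY my_idx;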
* Fix minor typo in comment  (Magnus Hagander, 2016-01-15)
    Tatsuro Yamada
* Fix spelling mistakes.  (Robert Haas, 2016-01-14)
    Same patch submitted independently by David Rowley and Peter Geoghegan.
* Fix build_grouping_chain() to not clobber its input lists.  (Tom Lane, 2016-01-14)
    There's no good reason for stomping on the input data; it makes the logic in this function no simpler, in fact probably the reverse. And it makes it impossible to separate path generation from plan generation, as I'm working towards doing; that will require more than one traversal of these lists.
* Properly close token in sspi authentication  (Magnus Hagander, 2016-01-14)
    We can never leak more than one token, but we shouldn't do that. We don't bother closing it in the error paths since the process will exit shortly anyway.

    Christian Ullrich
* Handle extension members when first setting object dump flags in pg_dump.  (Tom Lane, 2016-01-13)
    pg_dump's original approach to handling extension member objects was to run around and clear (or set) their dump flags rather late in its data collection process. Unfortunately, quite a lot of code expects those flags to be valid before that; which was an entirely reasonable expectation before we added extensions. In particular, this explains Karsten Hilbert's recent report of pg_upgrade failing on a database in which an extension has been installed into the pg_catalog schema. Its objects are initially marked as not-to-be-dumped on the strength of their schema, and later we change them to must-dump because we're doing a binary upgrade of their extension; but we've already skipped essential tasks like making associated DO_SHELL_TYPE objects.

    To fix, collect extension membership data first, and incorporate it in the initial setting of the dump flags, so that those are once again correct from the get-go. This has the undesirable side effect of slightly lengthening the time taken before pg_dump acquires table locks, but testing suggests that the increase in that window is not very much.

    Along the way, get rid of ugly special-case logic for deciding whether to dump procedural languages, FDWs, and foreign servers; dump decisions for those are now correct up-front, too.

    In 9.3 and up, this also fixes erroneous logic about when to dump event triggers (basically, they were *always* dumped before). In 9.5 and up, transform objects had that problem too.

    Since this problem came in with extensions, back-patch to all supported versions.
* Access pg_dump's options structs through Archive struct, not directly.  (Tom Lane, 2016-01-13)
    Rather than passing around DumpOptions and RestoreOptions as separate arguments, add fields to struct Archive to carry pointers to these objects, and access them through those fields when needed. There already was a RestoreOptions pointer in Archive, though for no obvious reason it was part of the "private" struct rather than out where pg_dump.c could see it.

    Doing this allows reversion of quite a lot of parameter-addition changes made in commit 0eea8047bf, which is a good thing IMO because this will reduce the code delta between 9.4 and 9.5, probably easing a few future back-patch efforts. Moreover, the previous commit only added a DumpOptions argument to functions that had to have it at the time, which means we could anticipate still more code churn (and more back-patch hazard) as the requirement spread further. I'd hit exactly that problem in my upcoming patch to fix extension membership marking, which is what motivated me to do this.
* Run pgindent on src/bin/pg_dump/*  (Tom Lane, 2016-01-13)
    To ease doing indent fixups on a couple of patches I have in progress.
* psql: Improve CREATE INDEX CONCURRENTLY tab completion  (Peter Eisentraut, 2016-01-12)
    The completion of CREATE INDEX CONCURRENTLY was lacking in several ways compared to a plain CREATE INDEX command:

    - CREATE INDEX <name> ON completes table names, but didn't with CONCURRENTLY.
    - CREATE INDEX completes ON and existing index names, but with CONCURRENTLY it only completed ON.
    - CREATE INDEX <name> completes ON, but didn't with CONCURRENTLY.

    These are now all fixed.
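    The command shape those completions walk through is the standard one, e.g. (all names are placeholders):

        CREATE INDEX CONCURRENTLY my_idx ON my_table (my_column);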
* psql: Fix CREATE INDEX tab completion  (Peter Eisentraut, 2016-01-12)
    The previous code supported a syntax like CREATE INDEX name CONCURRENTLY, which never existed. Mistake introduced in commit 37ec19a15ce452ee94f32ebc3d6a9a45868e82fd. Remove the addition of CONCURRENTLY at that point.
* psql: Update tab completion comment  (Peter Eisentraut, 2016-01-12)
    This just updates a comment to match the code.

    from Michael Paquier
* Add new user fn pg_current_xlog_flush_location()  (Simon Riggs, 2016-01-12)
    Tomas Vondra, reviewed by Michael Paquier and Amit Kapila
    Minor edits by me
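    Usage mirrors the other xlog-location functions of that era (later major releases renamed this family of functions), reporting how far WAL has been flushed to disk:

        SELECT pg_current_xlog_flush_location();
        -- returns an LSN such as 0/3000060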
* Maintain local LogwrtResult consistently  (Simon Riggs, 2016-01-12)
    Teach GetFlushRecPtr() to update LogwrtResult cache as performed by all other functions in xlog.c
* Remove no-longer-needed old-style check for incompatible plpythons.  (Tom Lane, 2016-01-11)
    Commit 866566a690bb9916 introduced a new mechanism for incompatible plpythons to detect each other. I left the old mechanism in place, because it seems possible that a plpython predating that commit might be used with one postdating it. (This would require updating plpython3 but not plpython2 or vice versa, but that seems well within the realm of possibility.) However, surely it will not be able to happen in 9.6 or later, so we can delete the old mechanism in HEAD.
* Avoid dump/reload problems when using both plpython2 and plpython3.  (Tom Lane, 2016-01-11)
    Commit 803716013dc1350f installed a safeguard against loading plpython2 and plpython3 at the same time, but asserted that both could still be used in the same database, just not in the same session. However, that's not actually all that practical because dumping and reloading will fail (since both libraries necessarily get loaded into the restoring session). pg_upgrade is even worse, because it checks for missing libraries by loading every .so library mentioned in the entire installation into one session, so that you can have only one across the whole cluster.

    We can improve matters by not throwing the error immediately in _PG_init, but only when and if we're asked to do something that requires calling into libpython. This ameliorates both of the above situations, since while execution of CREATE LANGUAGE, CREATE FUNCTION, etc will result in loading plpython, it isn't asked to do anything interesting (at least not if check_function_bodies is off, as it will be during a restore).

    It's possible that this opens some corner-case holes in which a crash could be provoked with sufficient effort. However, since plpython only exists as an untrusted language, any such crash would require superuser privileges, making it "don't do that" not a security issue. To reduce the hazards in this area, the error is still FATAL when it does get thrown.

    Per a report from Paul Jones. Back-patch to 9.2, which is as far back as the patch applies without work. (It could be made to work in 9.1, but given the lack of previous complaints, I'm disinclined to expend effort so far back. We've been pretty desultory about support for Python 3 in 9.1 anyway.)
* Remove obsolete comment.  (Robert Haas, 2016-01-10)
    Noted while reviewing a question from Dickson S. Guedes.
* Remove a useless PG_GETARG_DATUM() call from jsonb_build_array.  (Tom Lane, 2016-01-09)
    This loop uselessly fetched the argument after the one it's currently looking at. No real harm is done since we couldn't possibly fetch off the end of memory, but it's confusing to the reader.

    Also remove a duplicate (and therefore confusing) PG_ARGISNULL check in jsonb_build_object.

    I happened to notice these things while trolling for missed null-arg checks earlier today. Back-patch to 9.5, not because there is any real bug, but just because 9.5 and HEAD are still in sync in this file and we might as well keep them so.

    In passing, re-pgindent.
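    The functions touched here are the SQL-callable jsonb constructors, e.g.:

        SELECT jsonb_build_array(1, 2, 'three');        -- [1, 2, "three"]
        SELECT jsonb_build_object('a', 1, 'b', NULL);   -- {"a": 1, "b": null}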
* Add some checks on "char"-type columns to type_sanity and opr_sanity.  (Tom Lane, 2016-01-09)
    I noticed that the sanity checks in the regression tests omitted to check a couple of "poor man's enum" columns that you'd reasonably expect them to check.

    There are other "char"-type columns in system catalogs that are not covered by either type_sanity or opr_sanity, e.g. pg_rewrite.ev_type. However, those catalogs are not populated with any manually-created data during bootstrap, so it seems less necessary to check them this way.
* Clean up some lack-of-STRICT issues in the core code, too.  (Tom Lane, 2016-01-09)
    A scan for missed proisstrict markings in the core code turned up these functions:

    brin_summarize_new_values
    pg_stat_reset_single_table_counters
    pg_stat_reset_single_function_counters
    pg_create_logical_replication_slot
    pg_create_physical_replication_slot
    pg_drop_replication_slot

    The first three of these take OID, so a null argument will normally look like a zero to them, resulting in "ERROR: could not open relation with OID 0" for brin_summarize_new_values, and no action for the pg_stat_reset_XXX functions. The other three will dump core on a null argument, though this is mitigated by the fact that they won't do so until after checking that the caller is superuser or has rolreplication privilege.

    In addition, the pg_logical_slot_get/peek[_binary]_changes family was intentionally marked nonstrict, but failed to make nullness checks on all the arguments; so again a null-pointer-dereference crash is possible but only for superusers and rolreplication users.

    Add the missing ARGISNULL checks to the latter functions, and mark the former functions as strict in pg_proc. Make that change in the back branches too, even though we can't force initdb there, just so that installations initdb'd in future won't have the issue. Since none of these bugs rise to the level of security issues (and indeed the pg_stat_reset_XXX functions hardly misbehave at all), it seems sufficient to do this.

    In addition, fix some order-of-operations oddities in the slot_get_changes family, mostly cosmetic, but not the part that moves the function's last few operations into the PG_TRY block. As it stood, there was significant risk for an error to exit without clearing historical information from the system caches.

    The slot_get_changes bugs go back to 9.4 where that code was introduced. Back-patch appropriate subsets of the pg_proc changes into all active branches, as well.
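    For context, a strict (proisstrict) function is one whose result is NULL whenever any argument is NULL, with the function body never being run at all; a minimal sketch using a hypothetical function:

        CREATE FUNCTION add_one(int) RETURNS int
            AS 'SELECT $1 + 1' LANGUAGE SQL STRICT;
        SELECT add_one(NULL);   -- NULL; the body is never executed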
* Clean up code for widget_in() and widget_out().  (Tom Lane, 2016-01-09)
    Given syntactically wrong input, widget_in() could call atof() with an indeterminate pointer argument, typically leading to a crash; or if it didn't do that, it might return a NULL pointer, which again would lead to a crash since old-style C functions aren't supposed to do things that way. Fix that by correcting the off-by-one syntax test and throwing a proper error rather than just returning NULL.

    Also, since widget_in and widget_out have been marked STRICT for a long time, their tests for null inputs are just dead code; remove 'em. In the oldest branches, also improve widget_out to use snprintf not sprintf, just to be sure. In passing, get rid of a long-since-useless sprintf into a local buffer that nothing further is done with, and make some other minor coding style cleanups.

    In the intended regression-testing usage of these functions, none of this is very significant; but if the regression test database were left around in a production installation, these bugs could amount to a minor security hazard.

    Piotr Stefaniak, Michael Paquier, and Tom Lane
* Revoke change to rmgr desc of btree vacuum  (Simon Riggs, 2016-01-09)
    Per discussion with Andres Freund
* Add STRICT to some C functions created by the regression tests.  (Tom Lane, 2016-01-09)
    These functions readily crash when passed a NULL input value. The tests themselves do not pass NULL values to them; but when the regression database is used as a basis for fuzz testing, they cause a lot of noise. Also, if someone were to leave a regression database lying about in a production installation, these would create a minor security hazard.

    Andreas Seltenreich