path: root/src/pl/plperl/plperl.c
Commit message / Author / Age
...
* Change tupledesc->attrs[n] to TupleDescAttr(tupledesc, n). (Andres Freund, 2017-08-20)
This is a mechanical change in preparation for a later commit that will change the layout of TupleDesc. Introducing a macro to abstract the details of where attributes are stored will allow us to change that in a separate step and revise it in future.

Author: Thomas Munro, editorialized by Andres Freund
Reviewed-By: Andres Freund
Discussion: https://postgr.es/m/CAEepm=0ZtQ-SpsgCyzzYpsXS6e=kZWqk3g5Ygn3MDV7A8dabUA@mail.gmail.com

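To make the mechanical change concrete, here is a hedged sketch of such an accessor macro and a call site; it shows the idea, not necessarily the exact definition in the PostgreSQL headers.

    /* The accessor hides how attributes are stored; the later layout change
     * can redefine this macro without touching call sites. */
    #define TupleDescAttr(tupdesc, i) ((tupdesc)->attrs[(i)])

    /* Call sites change mechanically, e.g.: */
    Form_pg_attribute att = TupleDescAttr(tupdesc, attno); /* was: tupdesc->attrs[attno] */
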
* Tighten coding for non-composite case in plperl's return_next. (Tom Lane, 2017-07-31)
Coverity complained about this code's practice of using scalar variables as single-element arrays. While that's really just nitpicking, it probably is more readable to declare them as arrays, so let's do that.

A more important point is that the code was just blithely assuming that the result tupledesc has exactly one column; if it doesn't, we'd likely get a crash of some sort in tuplestore_putvalues. Since the tupledesc is manufactured outside of plperl, that seems like an uncomfortably long chain of assumptions. We can nail it down at little cost with a sanity check earlier in the function.

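A minimal sketch of both points, assuming the tuplestore and tupdesc come from the surrounding function, with the Perl-to-Datum conversion elided and the error code chosen purely for illustration:

    Datum       value[1];
    bool        isnull[1];

    /* Sanity-check the externally manufactured descriptor before trusting it. */
    if (tupdesc->natts != 1)
        ereport(ERROR,
                (errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),
                 errmsg("expected single-column tuple descriptor, got %d columns",
                        tupdesc->natts)));

    value[0] = (Datum) 0;       /* ... convert the Perl scalar here ... */
    isnull[0] = true;
    tuplestore_putvalues(tuplestore, tupdesc, value, isnull);
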
* PL/Perl portability fix: avoid including XSUB.h in plperl.c. (Tom Lane, 2017-07-28)
In Perl builds that define PERL_IMPLICIT_SYS, XSUB.h defines macros that replace a whole lot of basic libc functions with Perl functions. We can't tolerate that in plperl.c; it breaks at least PG_TRY and probably other stuff. The core idea of this patch is to include XSUB.h only in the .xs files where it's really needed, and to move any code broken by PERL_IMPLICIT_SYS out of the .xs files and into plperl.c.

The reason this hasn't been a problem before is that our build techniques did not result in PERL_IMPLICIT_SYS appearing as a #define in PL/Perl, even on some platforms where Perl thinks it is defined. That's about to change in order to fix a nasty portability issue, so we need this work to make the code safe for that.

Rather unaccountably, the Perl people chose XSUB.h as the place to provide the versions of the aTHX/aTHX_ macros that are needed by code that's not explicitly aware of the MULTIPLICITY API conventions. Hence, just removing XSUB.h from plperl.c fails miserably. But we can work around that by defining PERL_NO_GET_CONTEXT (which would make the relevant stanza of XSUB.h a no-op anyway). As explained in perlguts.pod, that means we need to add a "dTHX" macro call in every C function that calls a Perl API function. In most of them we just add this at the top; but since the macro fetches the current Perl interpreter pointer, more care is needed in functions that switch the active interpreter. Lack of the macro is easily recognized since it results in bleats about "my_perl" not being defined.

(A nice side benefit of this is that it significantly reduces the number of fetches of the current interpreter pointer. On my machine, plperl.so gets more than 10% smaller, and there's probably some performance win too. We could reduce the number of fetches still more by decorating the code with pTHX_/aTHX_ macros to pass the interpreter pointer around, as explained by perlguts.pod; but that's a task for another day.)

Formatting note: pgindent seems happy to treat "dTHX;" as a declaration so long as it's the first thing after the left brace, as we'd already observed with respect to the similar macro "dSP;". If you try to put it later in a set of declarations, pgindent puts ugly extra space around it.

Having removed XSUB.h from plperl.c, we need only move the support functions for spi_return_next and util_elog (both of which use PG_TRY) out of the .xs files and into plperl.c. This seems sufficient to avoid the known problems caused by PERL_IMPLICIT_SYS, although we could move more code if additional issues emerge.

This will need to be back-patched, but first let's see what the buildfarm makes of it.

Patch by me, with some help from Ashutosh Sharma
Discussion: https://postgr.es/m/CANFyU97OVQ3+Mzfmt3MhuUm5NwPU=-FtbNH5Eb7nZL9ua8=rcA@mail.gmail.com

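A sketch of the convention described above, shown as an assumed file skeleton rather than the actual plperl.c diff: PERL_NO_GET_CONTEXT is defined before the Perl headers, and each function that calls into the Perl API starts with dTHX.

    #define PERL_NO_GET_CONTEXT        /* must come before the Perl headers */
    #include "EXTERN.h"
    #include "perl.h"

    static STRLEN
    example_sv_length(SV *sv)
    {
        dTHX;                          /* fetch the current interpreter pointer once */

        return sv_len(sv);             /* without dTHX: error about "my_perl" undeclared */
    }
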
* Phase 3 of pgindent updates. (Tom Lane, 2017-06-21)
Don't move parenthesized lines to the left, even if that means they flow past the right margin.

By default, BSD indent lines up statement continuation lines that are within parentheses so that they start just to the right of the preceding left parenthesis. However, traditionally, if that resulted in the continuation line extending to the right of the desired right margin, then indent would push it left just far enough to not overrun the margin, if it could do so without making the continuation line start to the left of the current statement indent. That makes for a weird mix of indentations unless one has been completely rigid about never violating the 80-column limit.

This behavior has been pretty universally panned by Postgres developers. Hence, disable it with indent's new -lpl switch, so that parenthesized lines are always lined up with the preceding left paren.

This patch is much less interesting than the first round of indent changes, but also bulkier, so I thought it best to separate the effects.

Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us

* Phase 2 of pgindent updates. (Tom Lane, 2017-06-21)
Change pg_bsd_indent to follow upstream rules for placement of comments to the right of code, and remove pgindent hack that caused comments following #endif to not obey the general rule.

Commit e3860ffa4dd0dad0dd9eea4be9cc1412373a8c89 wasn't actually using the published version of pg_bsd_indent, but a hacked-up version that tried to minimize the amount of movement of comments to the right of code. The situation of interest is where such a comment has to be moved to the right of its default placement at column 33 because there's code there. BSD indent has always moved right in units of tab stops in such cases --- but in the previous incarnation, indent was working in 8-space tab stops, while now it knows we use 4-space tabs. So the net result is that in about half the cases, such comments are placed one tab stop left of before. This is better all around: it leaves more room on the line for comment text, and it means that in such cases the comment uniformly starts at the next 4-space tab stop after the code, rather than sometimes one and sometimes two tabs after.

Also, ensure that comments following #endif are indented the same as comments following other preprocessor commands such as #else. That inconsistency turns out to have been self-inflicted damage from a poorly-thought-through post-indent "fixup" in pgindent.

This patch is much less interesting than the first round of indent changes, but also bulkier, so I thought it best to separate the effects.

Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us

* Initial pgindent run with pg_bsd_indent version 2.0. (Tom Lane, 2017-06-21)
The new indent version includes numerous fixes thanks to Piotr Stefaniak. The main changes visible in this commit are:

* Nicer formatting of function-pointer declarations.
* No longer unexpectedly removes spaces in expressions using casts, sizeof, or offsetof.
* No longer wants to add a space in "struct structname *varname", as well as some similar cases for const- or volatile-qualified pointers.
* Declarations using PG_USED_FOR_ASSERTS_ONLY are formatted more nicely.
* Fixes bug where comments following declarations were sometimes placed with no space separating them from the code.
* Fixes some odd decisions for comments following case labels.
* Fixes some cases where comments following code were indented to less than the expected column 33.

On the less good side, it now tends to put more whitespace around typedef names that are not listed in typedefs.list. This might encourage us to put more effort into typedef name collection; it's not really a bug in indent itself.

There are more changes coming after this round, having to do with comment indentation and alignment of lines appearing within parentheses. I wanted to limit the size of the diffs to something that could be reviewed without one's eyes completely glazing over, so it seemed better to split up the changes as much as practical.

Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us

* Post-PG 10 beta1 pgindent run (Bruce Momjian, 2017-05-17)
perltidy run not included.

* Follow-on cleanup for the transition table patch. (Kevin Grittner, 2017-04-04)
Commit 59702716 added transition table support to PL/pgSQL so that SQL queries in trigger functions could access those transient tables.

In order to provide the same level of support for PL/Perl, PL/Python and PL/Tcl, refactor the relevant code into a new function SPI_register_trigger_data. Call the new function in the trigger handler of all four PLs, and document it as a public SPI function so that authors of out-of-tree PLs can do the same.

Also get rid of a second QueryEnvironment object that was maintained by PL/pgSQL. That was previously used to deal with cursors, but the same approach wasn't appropriate for PLs that are less tangled up with core code. Instead, have SPI_cursor_open install the connection's current QueryEnvironment, as already happens for SPI_execute_plan.

While in the docs, remove the note that transition tables were only supported in C and PL/pgSQL triggers, and correct some omissions.

Thomas Munro with some work by Kevin Grittner (mostly docs)

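A rough sketch of how a PL's trigger handler can use the new entry point (function name illustrative, error handling abbreviated):

    #include "commands/trigger.h"
    #include "executor/spi.h"

    static void
    example_trigger_entry(TriggerData *tdata)
    {
        int         rc;

        if (SPI_connect() != SPI_OK_CONNECT)
            elog(ERROR, "SPI_connect failed");

        /* Make the OLD TABLE / NEW TABLE transition tables visible to SPI queries. */
        rc = SPI_register_trigger_data(tdata);
        if (rc != SPI_OK_TD_REGISTER)
            elog(ERROR, "SPI_register_trigger_data failed: %s",
                 SPI_result_code_string(rc));

        /* ... run the trigger body through SPI, then SPI_finish() ... */
    }
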
* Remove useless duplicate inclusions of system header files. (Tom Lane, 2017-02-25)
c.h #includes a number of core libc header files, such as <stdio.h>. There's no point in re-including these after having read postgres.h, postgres_fe.h, or c.h; so remove code that did so.

While at it, also fix some places that were ignoring our standard pattern of "include postgres[_fe].h, then system header files, then other Postgres header files". While there's not any great magic in doing it that way rather than system headers last, it's silly to have just a few files deviating from the general pattern. (But I didn't attempt to enforce this globally, only in files I was touching anyway.)

I'd be the first to say that this is mostly compulsive neatnik-ism, but over time it might save enough compile cycles to be useful.

* Volatile-ize some plperl variables that must survive into PG_CATCH blocks. (Tom Lane, 2017-01-23)
This appears to be necessary to fix a failure seen on buildfarm member sittella. It shouldn't be necessary according to the letter of the C standard, because we don't change the values of these variables within the PG_TRY blocks; but somehow gcc 4.7.2 is dropping the ball.

Discussion: https://postgr.es/m/17555.1485179975@sss.pgh.pa.us

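The pattern in question, sketched with an illustrative variable: anything a PG_CATCH block needs to read is qualified volatile so the compiler cannot cache it in a register across the longjmp.

    plperl_call_data *volatile save_call_data = current_call_data;

    PG_TRY();
    {
        /* ... work that may elog(ERROR) ... */
    }
    PG_CATCH();
    {
        current_call_data = save_call_data;    /* safe because it is volatile */
        PG_RE_THROW();
    }
    PG_END_TRY();
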
* Simplify code by getting rid of SPI_push, SPI_pop, SPI_restore_connection. (Tom Lane, 2016-11-08)
The idea behind SPI_push was to allow transitioning back into an "unconnected" state when a SPI-using procedure calls unrelated code that might or might not invoke SPI. That sounds good, but in practice the only thing it does for us is to catch cases where a called SPI-using function forgets to call SPI_connect --- which is a highly improbable failure mode, since it would be exposed immediately by direct testing of said function. As against that, we've had multiple bugs induced by forgetting to call SPI_push/SPI_pop around code that might invoke SPI-using functions; these are much harder to catch and indeed have gone undetected for years in some cases. And we've had to band-aid around some problems of this ilk by introducing conditional push/pop pairs in some places, which really kind of defeats the purpose altogether; if we can't draw bright lines between connected and unconnected code, what's the point? Hence, get rid of SPI_push[_conditional], SPI_pop[_conditional], and the underlying state variable _SPI_curid.

It turns out SPI_restore_connection can go away too, which is a nice side benefit since it was never more than a kluge. Provide no-op macros for the deleted functions so as to avoid an API break for external modules.

A side effect of this removal is that SPI_palloc and allied functions no longer permit being called when unconnected; they'll throw an error instead. The apparent usefulness of the previous behavior was a mirage as well, because it was depended on by only a few places (which I fixed in preceding commits), and it posed a risk of allocations being unexpectedly long-lived if someone forgot a SPI_push call.

Discussion: <20808.1478481403@sss.pgh.pa.us>

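The compatibility shims presumably reduce to no-op macros along these lines (a sketch; the authoritative definitions live in spi.h):

    #define SPI_push()                    ((void) 0)
    #define SPI_pop()                     ((void) 0)
    #define SPI_push_conditional()        false
    #define SPI_pop_conditional(pushed)   ((void) 0)
    #define SPI_restore_connection()      ((void) 0)
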
* Make SPI_fnumber() reject dropped columns. (Tom Lane, 2016-11-08)
There's basically no scenario where it's sensible for this to match dropped columns, so put a test for dropped-ness into SPI_fnumber() itself, and excise the test from the small number of callers that were paying attention to the case. (Most weren't :-(.)

In passing, normalize tests at call sites: always reject attnum <= 0 if we're disallowing system columns. Previously there was a mixture of "< 0" and "<= 0" tests. This makes no practical difference since SPI_fnumber() never returns 0, but I'm feeling pedantic today.

Also, in the places that are actually live user-facing code and not legacy cruft, distinguish "column not found" from "can't handle system column".

Per discussion with Jim Nasby; this supersedes his original patch that just changed the behavior at one call site.

Discussion: <b2de8258-c4c0-1cb8-7b97-e8538e5c975c@BlueTreble.com>

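The normalized call-site pattern looks roughly like this in user-facing code (error codes chosen for illustration):

    int         attn = SPI_fnumber(tupdesc, fieldname);

    if (attn == SPI_ERROR_NOATTRIBUTE)
        ereport(ERROR,
                (errcode(ERRCODE_UNDEFINED_COLUMN),
                 errmsg("column \"%s\" does not exist", fieldname)));
    if (attn <= 0)              /* i.e. a system column */
        ereport(ERROR,
                (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
                 errmsg("cannot set system column \"%s\"", fieldname)));
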
* Use heap_modify_tuple not SPI_modifytuple in pl/perl triggers. (Tom Lane, 2016-11-08)
The code here would need some change anyway given planned change in SPI_modifytuple semantics, since this executes after we've exited the SPI environment. But really it's better to just use heap_modify_tuple. The code's actually shorter this way, and this avoids depending on some rather indirect reasoning about why the temporary arrays can't be overrun. (I think the old code is safe, as long as Perl hashes can't contain duplicate keys; but with this way we don't need that assumption, only the assumption that SPI_fnumber doesn't return an out-of-range attnum.)

While at it, normalize use of SPI_fnumber: make error messages distinguish no-such-column from can't-set-system-column, and remove test for deleted column which is going to migrate into SPI_fnumber.

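A hedged sketch of the replacement pattern: the three arrays are sized from the tuple descriptor, so correctness no longer depends on how many keys the Perl hash contains, and only columns actually present in the hash get marked for replacement.

    Datum      *modvalues = (Datum *) palloc0(tupdesc->natts * sizeof(Datum));
    bool       *modnulls = (bool *) palloc0(tupdesc->natts * sizeof(bool));
    bool       *modrepls = (bool *) palloc0(tupdesc->natts * sizeof(bool));

    /* for each key in the returned hash:
     *     attn = SPI_fnumber(tupdesc, key);      -- validated as above
     *     modvalues[attn - 1] = ...;
     *     modnulls[attn - 1] = ...;
     *     modrepls[attn - 1] = true;
     */

    rettuple = heap_modify_tuple(otup, tupdesc, modvalues, modnulls, modrepls);
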
* Improve memory management for PL/Perl functions. (Tom Lane, 2016-08-31)
Unlike PL/Tcl, PL/Perl at least made an attempt to clean up after itself when a function gets redefined. But it was still using TopMemoryContext for the fn_mcxt of argument/result I/O functions, resulting in the potential for memory leaks depending on what those functions did, and the retail alloc/free logic was pretty bulky as well.

Fix things to use a per-function memory context like the other PLs now do. Tweak a couple of places where things were being done in a not-very-safe order (on the principle that a memory leak is better than leaving global state inconsistent after an error). Also make some minor cosmetic adjustments, mostly in field names, to make the code look similar to the way PL/Tcl does now wherever it's essentially the same logic.

Michael Paquier and Tom Lane
Discussion: <CAB7nPqSOyAsHC6jL24J1B+oK3p=yyNoFU0Vs_B6fd2kdd5g5WQ@mail.gmail.com>

* Add macros to make AllocSetContextCreate() calls simpler and safer. (Tom Lane, 2016-08-27)
I found that half a dozen (nearly 5%) of our AllocSetContextCreate calls had typos in the context-sizing parameters. While none of these led to especially significant problems, they did create minor inefficiencies, and it's now clear that expecting people to copy-and-paste those calls accurately is not a great idea. Let's reduce the risk of future errors by introducing single macros that encapsulate the common use-cases. Three such macros are enough to cover all but two special-purpose contexts; those two calls can be left as-is, I think.

While this patch doesn't in itself improve matters for third-party extensions, it doesn't break anything for them either, and they can gradually adopt the simplified notation over time.

In passing, change TopMemoryContext to use the default allocation parameters. Formerly it could only be extended 8K at a time. That was probably reasonable when this code was written; but nowadays we create many more contexts than we did then, so that it's not unusual to have a couple hundred K in TopMemoryContext, even without considering various dubious code that sticks other things there. There seems no good reason not to let it use growing blocks like most other contexts.

Back-patch to 9.6, mostly because that's still close enough to HEAD that it's easy to do so, and keeping the branches in sync can be expected to avoid some future back-patching pain. The bugs fixed by these changes don't seem to be significant enough to justify fixing them further back.

Discussion: <21072.1472321324@sss.pgh.pa.us>

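For a plperl-style context the change amounts to replacing the three hand-written size arguments with one of the new macros; a before/after sketch:

    /* before: three separate parameters, easy to mistype */
    cxt = AllocSetContextCreate(TopMemoryContext,
                                "PL/Perl function context",
                                ALLOCSET_DEFAULT_MINSIZE,
                                ALLOCSET_DEFAULT_INITSIZE,
                                ALLOCSET_DEFAULT_MAXSIZE);

    /* after: one macro supplies the standard sizing triple */
    cxt = AllocSetContextCreate(TopMemoryContext,
                                "PL/Perl function context",
                                ALLOCSET_DEFAULT_SIZES);
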
* Copyedit comments and documentation. (Noah Misch, 2016-04-01)

* Update PL/Perl's comment about hv_store(). (Tom Lane, 2016-03-14)
Negative klen is documented since Perl 5.16, and 5.6 is no longer supported so no need to comment about it.

Dagfinn Ilmari Mannsåker

* Improve conversions from uint64 to Perl types. (Tom Lane, 2016-03-14)
Perl's integers are pointer-sized, so can hold more than INT_MAX on LP64 platforms, and come in both signed (IV) and unsigned (UV). Floating point values (NV) may also be larger than double.

Since Perl 5.19.4, array indices are SSize_t instead of I32, so allow up to SSize_t_max on those versions. The limit is not imposed just by av_extend's argument type, but by all the array handling code, so remove the speculative comment.

Dagfinn Ilmari Mannsåker

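A hedged sketch of the conversion this implies: a uint64 value that fits in Perl's pointer-sized UV is returned exactly, and anything larger falls back to a floating-point NV.

    static SV *
    example_uint64_to_sv(uint64 value)
    {
        dTHX;

        if (value <= UV_MAX)            /* always true on LP64 Perls */
            return newSVuv((UV) value);

        return newSVnv((NV) value);     /* approximate on narrower builds */
    }
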
* Widen query numbers-of-tuples-processed counters to uint64. (Tom Lane, 2016-03-12)
This patch widens SPI_processed, EState's es_processed field, PortalData's portalPos field, FuncCallContext's call_cntr and max_calls fields, ExecutorRun's count argument, PortalRunFetch's result, and the max number of rows in a SPITupleTable to uint64, and deals with (I hope) all the ensuing fallout. Some of these values were declared uint32 before, and others "long".

I also removed PortalData's posOverflow field, since that logic seems pretty useless given that portalPos is now always 64 bits.

The user-visible results are that command tags for SELECT etc will correctly report tuple counts larger than 4G, as will plpgsql's GET DIAGNOSTICS ... ROW_COUNT command. Queries processing more tuples than that are still not exactly the norm, but they're becoming more common.

Most values associated with FETCH/MOVE distances, such as PortalRun's count argument and the count argument of most SPI functions that have one, remain declared as "long". It's not clear whether it would be worth promoting those to int64; but it would definitely be a large dollop of additional API churn on top of this, and it would only help 32-bit platforms which seem relatively less likely to see any benefit.

Andreas Scherbaum, reviewed by Christian Ullrich, additional hacking by me

* plperl: Correctly handle empty arrays in plperl_ref_from_pg_array. (Andres Freund, 2016-03-08)
plperl_ref_from_pg_array() didn't consider the case that postgres arrays can have 0 dimensions (when they're empty) and accessed the first dimension without a check. Fix that by special-casing the empty array case.

Author: Alex Hunsaker
Reported-By: Andres Freund / valgrind / buildfarm animal skink
Discussion: 20160308063240.usnzg6bsbjrne667@alap3.anarazel.de
Backpatch: 9.1-

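The special case presumably boils down to something like this sketch: with zero dimensions there are no elements, so return a reference to an empty Perl array rather than reading the (nonexistent) first dimension.

    int         ndims = ARR_NDIM(array);

    if (ndims == 0)
        return newRV_noinc((SV *) newAV());     /* reference to an empty AV */
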
* Instruct Coverity using an assertion. (Noah Misch, 2015-12-05)
This should make Coverity deduce that plperl_call_perl_func() does not dereference NULL argtypes. Back-patch to 9.5, where the affected code was introduced.

Michael Paquier

* Fix thinko: errmsg -> ereport. (Tom Lane, 2015-11-19)
Silly mistake in my commit 09cecdf285ea9f51, reported by Erik Rijkers.

The fact that the buildfarm didn't find this implies that we are not testing Perl builds that lack MULTIPLICITY, which is a bit disturbing from a coverage standpoint. Until today I'd have said nobody cared about such configurations anymore; but maybe not.

* Fix plperl to handle non-ASCII error message texts correctly. (Tom Lane, 2015-09-29)
We were passing error message texts to croak() verbatim, which turns out not to work if the text contains non-ASCII characters; Perl mangles their encoding, as reported in bug #13638 from Michal Leinweber. To fix, convert the text into a UTF8-encoded SV first.

It's hard to test this without risking failures in different database encodings; but we can follow the lead of plpython, which is already assuming that no-break space (U+00A0) has an equivalent in all encodings we care about running the regression tests in (cf commit 2dfa15de5).

Back-patch to 9.1. The code is quite different in 9.0, and anyway it seems too risky to put something like this into 9.0's final minor release.

Alex Hunsaker, with suggestions from Tim Bunce and Tom Lane

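Under the assumption of a reasonably modern Perl (where croak_sv and newSVpvn_flags are available), the fix sketches out like this; msg is assumed to have been recoded to UTF-8 already.

    SV *err = newSVpvn_flags(msg, strlen(msg), SVf_UTF8);

    croak_sv(sv_2mortal(err));      /* croak with an SV, so the encoding survives */
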
* Don't use function definitions looking like old-style ones. (Andres Freund, 2015-08-15)
This fixes a bunch of somewhat pedantic warnings with new compilers. Since by far the majority of other function definitions use the (void) style it just seems to be consistent to do so as well in the remaining few places.

* Fix a number of places that produced XX000 errors in the regression tests. (Tom Lane, 2015-08-02)
It's against project policy to use elog() for user-facing errors, or to omit an errcode() selection for errors that aren't supposed to be "can't happen" cases. Fix all the violations of this policy that result in ERRCODE_INTERNAL_ERROR log entries during the standard regression tests, as errors that can reliably be triggered from SQL surely should be considered user-facing.

I also looked through all the files touched by this commit and fixed other nearby problems of the same ilk. I do not claim to have fixed all violations of the policy, just the ones in these files.

In a few places I also changed existing ERRCODE choices that didn't seem particularly appropriate; mainly replacing ERRCODE_SYNTAX_ERROR by something more specific.

Back-patch to 9.5, but no further; changing ERRCODE assignments in stable branches doesn't seem like a good idea.

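In practice the policy means turning a bare elog() into an ereport() with a deliberate SQLSTATE for anything reachable from SQL; the error code below is illustrative.

    /* before: reports SQLSTATE XX000 (internal_error) */
    elog(ERROR, "$_TD->{new} does not exist");

    /* after: a user-facing error code chosen on purpose */
    ereport(ERROR,
            (errcode(ERRCODE_UNDEFINED_OBJECT),
             errmsg("$_TD->{new} does not exist")));
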
* pgindent run for 9.5 (Bruce Momjian, 2015-05-23)

* Add transforms feature (Peter Eisentraut, 2015-04-26)
This provides a mechanism for specifying conversions between SQL data types and procedural languages. As examples, there are transforms for hstore and ltree for PL/Perl and PL/Python.

reviews by Pavel Stěhule and Andres Freund

* In array_agg(), don't create a new context for every group. (Jeff Davis, 2015-02-21)
Previously, each new array created a new memory context that started out at 8kB. This is incredibly wasteful when there are lots of small groups of just a few elements each.

Change initArrayResult() and friends to accept a "subcontext" argument to indicate whether the caller wants the ArrayBuildState allocated in a new subcontext or not. If not, it can no longer be released separately from the rest of the memory context.

Fixes bug report by Frank van Vugt on 2013-10-19.

Tomas Vondra. Reviewed by Ali Akbar, Tom Lane, and me.

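A sketch of the new calling convention: passing subcontext = false keeps the ArrayBuildState in the caller's context instead of spinning up a fresh 8kB context per group (element type and values are illustrative).

    ArrayBuildState *astate;
    Datum       result;
    int         i;

    /* false: allocate the build state in CurrentMemoryContext, not a child context */
    astate = initArrayResult(INT4OID, CurrentMemoryContext, false);

    for (i = 0; i < 10; i++)
        astate = accumArrayResult(astate, Int32GetDatum(i), false,
                                  INT4OID, CurrentMemoryContext);

    result = makeArrayResult(astate, CurrentMemoryContext);
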
* Improve hash_create's API for selecting simple-binary-key hash functions. (Tom Lane, 2014-12-18)
Previously, if you wanted anything besides C-string hash keys, you had to specify a custom hashing function to hash_create(). Nearly all such callers were specifying tag_hash or oid_hash; which is tedious, and rather error-prone, since a caller could easily miss the opportunity to optimize by using hash_uint32 when appropriate. Replace this with a design whereby callers using simple binary-data keys just specify HASH_BLOBS and don't need to mess with specific support functions. hash_create() itself will take care of optimizing when the key size is four bytes.

This nets out saving a few hundred bytes of code space, and offers a measurable performance improvement in tidbitmap.c (which was not exploiting the opportunity to use hash_uint32 for its 4-byte keys). There might be some wins elsewhere too, I didn't analyze closely.

In future we could look into offering a similar optimized hashing function for 8-byte keys. Under this design that could be done in a centralized and machine-independent fashion, whereas getting it right for keys of platform-dependent sizes would've been notationally painful before.

For the moment, the old way still works fine, so as not to break source code compatibility for loadable modules. Eventually we might want to remove tag_hash and friends from the exported API altogether, since there's no real need for them to be explicitly referenced from outside dynahash.c.

Teodor Sigaev and Tom Lane

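With the new API, a caller hashing a fixed-size binary key (say an Oid) just sets HASH_BLOBS instead of wiring in tag_hash or oid_hash; a sketch with an illustrative entry struct name:

    HASHCTL     ctl;
    HTAB       *hashtab;

    memset(&ctl, 0, sizeof(ctl));
    ctl.keysize = sizeof(Oid);
    ctl.entrysize = sizeof(MyHashEntry);

    /* HASH_BLOBS replaces passing a hash function via HASH_FUNCTION */
    hashtab = hash_create("example oid hash", 128, &ctl,
                          HASH_ELEM | HASH_BLOBS);
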
* Support arrays as input to array_agg() and ARRAY(SELECT ...). (Tom Lane, 2014-11-25)
These cases formerly failed with errors about "could not find array type for data type". Now they yield arrays of the same element type and one higher dimension.

The implementation involves creating functions with API similar to the existing accumArrayResult() family. I (tgl) also extended the base family by adding an initArrayResult() function, which allows callers to avoid special-casing the zero-inputs case if they just want an empty array as result. (Not all do, so the previous calling convention remains valid.) This allowed simplifying some existing code in xml.c and plperl.c.

Ali Akbar, reviewed by Pavel Stehule, significantly modified by me

* Adjust blank lines around PG_MODULE_MAGIC defines, for consistency (Bruce Momjian, 2014-07-10)
Report by Robert Haas

* pgindent run for 9.4 (Bruce Momjian, 2014-05-06)
This includes removing tabs after periods in C comments, which was applied to back branches, so this change should not affect backpatching.

* Create function prototype as part of PG_FUNCTION_INFO_V1 macro (Peter Eisentraut, 2014-04-18)
Because of gcc -Wmissing-prototypes, all functions in dynamically loadable modules must have a separate prototype declaration. This is meant to detect global functions that are not declared in header files, but in cases where the function is called via dfmgr, this is redundant. Besides filling up space with boilerplate, this is a frequent source of compiler warnings in extension modules.

We can fix that by creating the function prototype as part of the PG_FUNCTION_INFO_V1 macro, which such modules have to use anyway. That makes the code of modules cleaner, because there is one less place where the entry points have to be listed, and creates an additional check that functions have the right prototype.

Remove now redundant prototypes from contrib and other modules.

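For extension code the net effect is that the hand-written prototype disappears; the macro now emits it (sketch with an illustrative function name):

    #include "postgres.h"
    #include "fmgr.h"

    PG_MODULE_MAGIC;

    PG_FUNCTION_INFO_V1(add_one);   /* also declares: extern Datum add_one(PG_FUNCTION_ARGS); */

    Datum
    add_one(PG_FUNCTION_ARGS)
    {
        int32       arg = PG_GETARG_INT32(0);

        PG_RETURN_INT32(arg + 1);
    }
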
* Add new to_reg* functions for error-free OID lookups. (Robert Haas, 2014-04-08)
These functions won't throw an error if the object doesn't exist, or if (for functions and operators) there's more than one matching object.

Yugo Nagata and Nozomi Anzai, reviewed by Amit Khandekar, Marti Raudsepp, Amit Kapila, and me.

* plperl: Fix memory leak in hek2cstr (Alvaro Herrera, 2014-03-16)
Backpatch all the way back to 9.1, where it was introduced by commit 50d89d42.

Reported by Sergey Burladyan in #9223

Author: Alex Hunsaker

* C comments: remove odd blank lines after #ifdef WIN32 lines (Bruce Momjian, 2014-03-13)

* Prefer pg_any_to_server/pg_server_to_any over pg_do_encoding_conversion. (Tom Lane, 2014-02-23)
A large majority of the callers of pg_do_encoding_conversion were specifying the database encoding as either source or target of the conversion, meaning that we can use the less general functions pg_any_to_server/pg_server_to_any instead.

The main advantage of using the latter functions is that they can make use of a cached conversion-function lookup in the common case that the other encoding is the current client_encoding. It's notationally cleaner too in most cases, not least because of the historical artifact that the latter functions use "char *" rather than "unsigned char *" in their APIs.

Note that pg_any_to_server will apply an encoding verification step in some cases where pg_do_encoding_conversion would have just done nothing. This seems to me to be a good idea at most of these call sites, though it partially negates the performance benefit.

Per discussion of bug #9210.

* Prevent privilege escalation in explicit calls to PL validators. (Noah Misch, 2014-02-17)
The primary role of PL validators is to be called implicitly during CREATE FUNCTION, but they are also normal functions that a user can call explicitly. Add a permissions check to each validator to ensure that a user cannot use explicit validator calls to achieve things he could not otherwise achieve. Back-patch to 8.4 (all supported versions). Non-core procedural language extensions ought to make the same two-line change to their own validators.

Andres Freund, reviewed by Tom Lane and Noah Misch.

Security: CVE-2014-0061

* PL/Perl: Fix compiler warning (Peter Eisentraut, 2014-02-04)
The code was assigning a (Datum) 0 to a void pointer. That creates a warning from clang 3.4. It was probably a thinko to begin with.

* Change the way we mark tuples as frozen. (Robert Haas, 2013-12-22)
Instead of changing the tuple xmin to FrozenTransactionId, the combination of HEAP_XMIN_COMMITTED and HEAP_XMIN_INVALID, which were previously never set together, is now defined as HEAP_XMIN_FROZEN. A variety of previous proposals to freeze tuples opportunistically before vacuum_freeze_min_age is reached have foundered on the objection that replacing xmin by FrozenTransactionId might hinder debugging efforts when things in this area go awry; this patch is intended to solve that problem by keeping the XID around (but largely ignoring the value to which it is set).

Third-party code that checks for HEAP_XMIN_INVALID on tuples where HEAP_XMIN_COMMITTED might be set will be broken by this change. To fix, use the new accessor macros in htup_details.h rather than consulting the bits directly. HeapTupleHeaderGetXmin has been modified to return FrozenTransactionId when the infomask bits indicate that the tuple is frozen; use HeapTupleHeaderGetRawXmin when you already know that the tuple isn't marked committed or frozen, or want the raw value anyway. We currently do this in routines that display the xmin for user consumption, in tqual.c where it's known to be safe and important for the avoidance of extra cycles, and in the function-caching code for various procedural languages, which shouldn't invalidate the cache just because the tuple gets frozen.

Robert Haas and Andres Freund

* PL/Perl: Add event trigger support (Peter Eisentraut, 2013-12-11)
From: Dimitri Fontaine <dimitri@2ndQuadrant.fr>

* pgindent run for release 9.3 (Bruce Momjian, 2013-05-29)
This is the first run of the Perl-based pgindent script. Also update pgindent instructions.

* Move pqsignal() to libpgport. (Tom Lane, 2013-03-17)
We had two copies of this function in the backend and libpq, which was already pretty bogus, but it turns out that we need it in some other programs that don't use libpq (such as pg_test_fsync). So put it where it probably should have been all along. The signal-mask-initialization support in src/backend/libpq/pqsignal.c stays where it is, though, since we only need that in the backend.

* Eliminate memory leaks in plperl's spi_prepare() function. (Tom Lane, 2013-03-01)
Careless use of TopMemoryContext for I/O function data meant that repeated use of spi_prepare and spi_freeplan would leak memory at the session level, as per report from Christian Schröder. In addition, spi_prepare leaked a lot of transient data within the current plperl function's SPI Proc context, which would be a problem for repeated use of spi_prepare within a single plperl function call; and it wasn't terribly careful about releasing permanent allocations in event of an error, either.

In passing, clean up some copy-and-pasteos in query-lookup error messages.

Alex Hunsaker and Tom Lane

* Keep plperl's current_call_data record on the stack, instead of palloc'ing. (Tom Lane, 2012-09-13)
This at least saves some palloc overhead, and should furthermore reduce the risk of anything going wrong, eg somebody resetting the context the current_call_data record was in.

* Make plperl safe against functions that are redefined while running. (Tom Lane, 2012-09-09)
validate_plperl_function() supposed that it could free an old plperl_proc_desc struct immediately upon detecting that it was stale. However, if a plperl function is called recursively, this could result in deleting the struct out from under an outer invocation, leading to misbehavior or crashes. Add a simple reference-count mechanism to ensure that such structs are freed only when the last reference goes away.

Per investigation of bug #7516 from Marko Tiikkaja. I am not certain that this error explains his report, because he says he didn't have any recursive calls --- but it's hard to see how else it could have crashed right there. In any case, this definitely fixes some problems in the area.

Back-patch to all active branches.

* Restore SIGFPE handler after initializing PL/Perl. (Tom Lane, 2012-09-05)
Perl, for some unaccountable reason, believes it's a good idea to reset SIGFPE handling to SIG_IGN. Which wouldn't be a good idea even if it worked; but on some platforms (Linux at least) it doesn't work at all, instead resulting in forced process termination if the signal occurs.

Given the lack of other complaints, it seems safe to assume that Perl never actually provokes SIGFPE and so there is no value in the setting anyway. Hence, reset it to our normal handler after initializing Perl.

Report, analysis and patch by Andres Freund.

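The fix amounts to putting the backend's handler back once Perl has finished initializing; a short sketch using the backend's usual FPE handler:

    #include "tcop/tcopprot.h"      /* FloatExceptionHandler */

    /* Perl may have set SIGFPE to SIG_IGN during interpreter setup. */
    pqsignal(SIGFPE, FloatExceptionHandler);
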
* Split tuple struct defs from htup.h to htup_details.h (Alvaro Herrera, 2012-08-30)
This reduces unnecessary exposure of other headers through htup.h, which is very widely included by many files.

I have chosen to move the function prototypes to the new file as well, because that means htup.h no longer needs to include tupdesc.h. In itself this doesn't have much effect in indirect inclusion of tupdesc.h throughout the tree, because it's also required by execnodes.h; but it's something to explore in the future, and it seemed best to do the htup.h change now while I'm busy with it.

* Add comment why seemingly dead code is necessary (Peter Eisentraut, 2012-07-16)

* Run pgindent on 9.2 source tree in preparation for first 9.3 commit-fest. (Bruce Momjian, 2012-06-10)