| Commit message | Author | Age |
| |
This adds a new variant of generated columns that are computed on read
(like a view, unlike the existing stored generated columns, which are
computed on write, like a materialized view).
The syntax for the column definition is
... GENERATED ALWAYS AS (...) VIRTUAL
and VIRTUAL is also optional. VIRTUAL is the default rather than
STORED to match various other SQL products. (The SQL standard makes
no specification about this, but it also doesn't know about VIRTUAL or
STORED.) (Also, virtual views are the default, rather than
materialized views.)
Virtual generated columns are stored in tuples as null values. (A
very early version of this patch had the ambition to not store them at
all. But so much stuff breaks or gets confused if you have tuples
where a column in the middle is completely missing. This is a
compromise, and it still saves space over being forced to use stored
generated columns. If we ever find a way to improve this, a bit of
pg_upgrade cleverness could allow for upgrades to a newer scheme.)
The capabilities and restrictions of virtual generated columns are
mostly the same as for stored generated columns. In some cases, this
patch keeps virtual generated columns more restricted than they might
technically need to be, in order to keep the two kinds consistent.
Some of that could be relaxed later after separate careful
consideration.
Some functionality is currently not supported but could be added
later as incremental features, some more easily than others:
- index on or using a virtual column
- hence also no unique constraints on virtual columns
- extended statistics on virtual columns
- foreign-key constraints on virtual columns
- not-null constraints on virtual columns (check constraints are supported)
- ALTER TABLE / DROP EXPRESSION
- virtual columns of domain types
- virtual columns in logical replication
The tests in generated_virtual.sql have been copied over from
generated_stored.sql with the keyword replaced. This way we can make
sure the behavior is mostly aligned and the differences are visible.
Tests for features that are not yet supported are commented out.
Reviewed-by: Jian He <jian.universality@gmail.com>
Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com>
Tested-by: Shlok Kyal <shlok.kyal.oss@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/a368248e-69e4-40be-9c07-6c3b5880b0a6@eisentraut.org
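For illustration, a minimal sketch of the new syntax (table and column
names are made up):

    CREATE TABLE orders (
        quantity   int,
        unit_price numeric,
        total      numeric GENERATED ALWAYS AS (quantity * unit_price) VIRTUAL
    );

    INSERT INTO orders (quantity, unit_price) VALUES (3, 9.99);
    SELECT total FROM orders;   -- computed on read: 29.97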
| |
TestUpgradeXversion knows how to make the main regression database's
references to pg_regress.so be version-independent. But it doesn't
do that for plperl's database, so the C function added by commit
b7e3a52a8 causes cross-version upgrade test failures. The path of
least resistance is to just drop the function at the end of the new
test.
In <= v14, also take the opportunity to clean up the generated
test files.
Security: CVE-2024-10979
| |
Many process environment variables (e.g. PATH), bypass the containment
expected of a trusted PL. Hence, trusted PLs must not offer features
that achieve setenv(). Otherwise, an attacker having USAGE privilege on
the language often can achieve arbitrary code execution, even if the
attacker lacks a database server operating system user.
To fix PL/Perl, replace trusted PL/Perl %ENV with a tied hash that just
replaces each modification attempt with a warning. Sites that reach
these warnings should evaluate the application-specific implications of
proceeding without the environment modification:
Can the application reasonably proceed without the modification?
If no, switch to plperlu or another approach.
If yes, the application should change the code to stop attempting
environment modifications. If that's too difficult, add "untie
%main::ENV" in any code executed before the warning. For example,
one might add it to the start of the affected function or even to
the plperl.on_plperl_init setting.
In passing, link to Perl's guidance about the Perl features behind the
security posture of PL/Perl.
Back-patch to v12 (all supported versions).
Andrew Dunstan and Noah Misch
Security: CVE-2024-10979
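As a sketch of the workaround described above (the function name and the
environment variable are hypothetical), the untie can go at the start of
the affected function or into plperl.on_plperl_init:

    -- per affected function:
    CREATE OR REPLACE FUNCTION set_tz() RETURNS void LANGUAGE plperl AS $$
        untie %main::ENV;   # accept the implications of modifying the environment
        $ENV{TZ} = 'UTC';
    $$;

    -- or once, via the init hook (requires superuser):
    SET plperl.on_plperl_init = 'untie %main::ENV;';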
| |
plperl_array_to_datum() wasn't sufficiently careful about checking
that nested lists represent a rectangular array structure; it would
accept inputs such as "[1, []]". This is a bit related to the
PL/Python bug fixed in commit 81eaaf65e, but it doesn't seem to
provide any direct route to a memory stomp. Instead the likely
failure mode is for makeMdArrayResult to be passed fewer Datums than
the claimed array dimensionality requires, possibly leading to a wild
pointer dereference and SIGSEGV.
Per report from Alexander Lakhin. It's been broken for a long
time, so back-patch to all supported branches.
Discussion: https://postgr.es/m/5ebae5e4-d401-fadf-8585-ac3eaf53219c@gmail.com
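A sketch of the sort of input that is now rejected cleanly (the function
name is made up):

    CREATE FUNCTION bogus_array() RETURNS int[] LANGUAGE plperl AS $$
        return [1, []];   # nested lists do not form a rectangular array
    $$;

    SELECT bogus_array();  -- now raises an error instead of risking a crash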
| |
These cause problems when running installcheck-world USE_MODULE_DB=1 with -j.
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/20221003234111.4ob7yph6r4g4ywhu@awork3.anarazel.de
| |
We allow users to set the values of not-yet-loaded extension GUCs,
remembering those values in "placeholder" GUC entries. When/if
the extension is loaded later in the session, we need to verify that
the user had permissions to set the GUC. That was done correctly
before commit a0ffa885e, but as of that commit, we'd check the
permissions of the active role when the LOAD happens, not the role
that had set the value. (This'd be a security bug if it had made it
into a released version.)
In principle this is simple enough to fix: we just need to remember
the exact role OID that set each GUC value, and use that, not
GetUserId(), when verifying permissions. Maintaining that data in
the guc.c data structures is slightly tedious, but fortunately it's
all basically just copy-n-paste of the logic for tracking the
GucSource of each setting, as we were already doing.
Another oversight is that validate_option_array_item() hadn't
been taught to check for granted GUC privileges. This appears
to manifest only in that ALTER ROLE/DATABASE RESET ALL will
fail to reset settings that the user should be allowed to reset.
Patch by myself and Nathan Bossart, per report from Nathan Bossart.
Back-patch to v15 where the faulty code came in.
Discussion: https://postgr.es/m/20220706224727.GA2158260@nathanxps13
| |
Perl 5.36 has reclassified the warning condition that this test
case used, so that the expected error fails to appear. Tweak
the test so it instead exercises a case that's handled the same
way in all Perl versions of interest.
This appears to meet our standards for back-patching into
out-of-support branches: it changes no user-visible behavior
but enables testing of old branches with newer tools.
Hence, back-patch as far as 9.2.
Dagfinn Ilmari Mannsåker, per report from Jitka Plesníková.
Discussion: https://postgr.es/m/564579.1654093326@sss.pgh.pa.us
| |
SPI_commit previously left it up to the caller to recover from any error
occurring during commit. Since that's complicated and requires use of
low-level xact.c facilities, it's not too surprising that no caller got
it right. Let's move the responsibility for cleanup into spi.c. Doing
that requires redefining SPI_commit as starting a new transaction, so
that it becomes equivalent to SPI_commit_and_chain except that you get
default transaction characteristics instead of preserving the prior
transaction's characteristics. We can make this pretty transparent
API-wise by redefining SPI_start_transaction() as a no-op. Callers
that expect to do something in between might be surprised, but
available evidence is that no callers do so.
Having made that API redefinition, we can fix this mess by having
SPI_commit[_and_chain] trap errors and start a new, clean transaction
before re-throwing the error. Likewise for SPI_rollback[_and_chain].
Some cleanup is also needed in AtEOXact_SPI, which was nowhere near
smart enough to deal with SPI contexts nested inside a committing
context.
While plperl and pltcl need no changes beyond removing their now-useless
SPI_start_transaction() calls, plpython needs some more work because it
hadn't gotten the memo about catching commit/rollback errors in the
first place. Such an error resulted in longjmp'ing out of the Python
interpreter, which leaks Python stack entries at present and is reported
to crash Python 3.11 altogether. Add the missing logic to catch such
errors and convert them into Python exceptions.
We are probably going to have to back-patch this once Python 3.11 ships,
but it's a sufficiently basic change that I'm a bit nervous about doing
so immediately. Let's let it bake awhile in HEAD first.
Peter Eisentraut and Tom Lane
Discussion: https://postgr.es/m/3375ffd8-d71c-2565-e348-a597d6e739e3@enterprisedb.com
Discussion: https://postgr.es/m/17416-ed8fe5d7213d6c25@postgresql.org
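A user-level sketch of the failure mode being cleaned up here (table,
constraint, and procedure names are made up): an error raised while
committing inside a procedure now leaves the session in a clean state.

    CREATE TABLE t (a int UNIQUE DEFERRABLE INITIALLY DEFERRED);

    CREATE PROCEDURE insert_dups() LANGUAGE plpgsql AS $$
    BEGIN
        INSERT INTO t VALUES (1);
        INSERT INTO t VALUES (1);   -- duplicate; the check is deferred
        COMMIT;                     -- the constraint failure surfaces here
    END;
    $$;

    CALL insert_dups();  -- reports the unique violation; a new, clean
                         -- transaction has already been started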
| |
This switches the default ACL to what the documentation has recommended
since CVE-2018-1058. Upgrades will carry forward any old ownership and
ACL. Sites that declined the 2018 recommendation should take a fresh
look. Recipes for commissioning a new database cluster from scratch may
need to create a schema, grant more privileges, etc. Out-of-tree test
suites may require such updates.
Reviewed by Peter Eisentraut.
Discussion: https://postgr.es/m/20201031163518.GB4039133@rfd.leadboat.com
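A sketch of what a from-scratch provisioning recipe might now need (role
and schema names are made up):

    -- restore the old default for one database, if that is really wanted:
    GRANT CREATE ON SCHEMA public TO PUBLIC;

    -- or, per the 2018 recommendation, give the application its own schema:
    CREATE ROLE app_owner;
    CREATE SCHEMA app AUTHORIZATION app_owner;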
| |
Unlike for functions, OUT parameters for procedures are part of the
signature. Therefore, they have to be listed in pg_proc.proargtypes
as well as mentioned in ALTER PROCEDURE and DROP PROCEDURE.
Reviewed-by: Andrew Dunstan <andrew.dunstan@2ndquadrant.com>
Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/2b8490fe-51af-e671-c504-47359dc453c5@2ndquadrant.com
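A short sketch of the consequence for identifying a procedure (names are
made up):

    CREATE PROCEDURE get_answer(IN q text, OUT a int)
    LANGUAGE plpgsql AS $$
    BEGIN
        a := 42;
    END;
    $$;

    CALL get_answer('life', NULL);         -- returns a = 42

    -- the OUT parameter's type is now part of the signature:
    DROP PROCEDURE get_answer(text, int);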
| |
Per buildfarm.
| |
This patch creates a new extension property, "trusted". An extension
that's marked that way in its control file can be installed by a
non-superuser who has the CREATE privilege on the current database,
even if the extension contains objects that normally would have to be
created by a superuser. The objects within the extension will (by
default) be owned by the bootstrap superuser, but the extension itself
will be owned by the calling user. This allows replicating the old
behavior around trusted procedural languages, without all the
special-case logic in CREATE LANGUAGE. We have, however, chosen to
loosen the rules slightly: formerly, only a database owner could take
advantage of the special case that allowed installation of a trusted
language, but now anyone who has CREATE privilege can do so.
Having done that, we can delete the pg_pltemplate catalog, moving the
knowledge it contained into the extension script files for the various
PLs. This ends up being no change at all for the in-core PLs, but it is
a large step forward for external PLs: they can now have the same ease
of installation as core PLs do. The old "trusted PL" behavior was only
available to PLs that had entries in pg_pltemplate, but now any
extension can be marked trusted if appropriate.
This also removes one of the stumbling blocks for our Python 2 -> 3
migration, since the association of "plpythonu" with Python 2 is no
longer hard-wired into pg_pltemplate's initial contents. Exactly where
we go from here on that front remains to be settled, but one problem
is fixed.
Patch by me, reviewed by Peter Eisentraut, Stephen Frost, and others.
Discussion: https://postgr.es/m/5889.1566415762@sss.pgh.pa.us
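A sketch of the new property in use (extension, database, and role names
are made up):

    -- in myext.control:
    --   trusted = true

    GRANT CREATE ON DATABASE appdb TO alice;

    -- then, connected to appdb as alice (not a superuser):
    CREATE EXTENSION myext;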
| |
This is an SQL-standard feature that allows creating columns that are
computed from expressions rather than assigned, similar to a view or
materialized view but on a column basis.
This implements one kind of generated column: stored (computed on
write). Another kind, virtual (computed on read), is planned for the
future, and some room is left for it.
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/b151f851-4019-bdb1-699e-ebab07d2f40a@2ndquadrant.com
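A minimal sketch of the syntax this introduces (table and column names
are made up):

    CREATE TABLE people (
        height_cm numeric,
        height_in numeric GENERATED ALWAYS AS (height_cm / 2.54) STORED
    );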
| |
There seems little reason for the policy of throwing error if we
find a ref to something other than a hash or array. Recursively
look through the ref, instead. This makes the behavior in non-transform
cases comparable to what was already implemented for jsonb_plperl.
Note that because we invoke any available transform function before
considering the ref case, it's up to each transform function whether
it wants to play along with this behavior or do something different.
Because the previous behavior was just to throw a useless error,
this seems unlikely to create any compatibility issues. Still, given
the lack of field complaints so far, seems best not to back-patch.
Discussion: https://postgr.es/m/28336.1528393969@sss.pgh.pa.us
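A sketch of a case that previously just threw an error (the function name
is made up):

    CREATE FUNCTION nested_ref() RETURNS int[] LANGUAGE plperl AS $$
        my @vals = (1, 2, 3);
        my $aref = \@vals;
        return \$aref;   # a reference to an array reference; now looked through
    $$;

    SELECT nested_ref();  -- {1,2,3}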
| |
Previously, committing or aborting inside a cursor loop was prohibited
because that would close and remove the cursor. To allow that,
automatically convert such cursors to holdable cursors so they survive
commits or rollbacks. Portals now have a new state "auto-held", which
means they have been converted automatically from pinned. An auto-held
portal is kept on transaction commit or rollback, but is still removed
when returning to the main loop on error.
This supports all languages that have cursor loop constructs: PL/pgSQL,
PL/Python, PL/Perl.
Reviewed-by: Ildus Kurbangaliev <i.kurbangaliev@postgrespro.ru>
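A PL/pgSQL sketch of what this enables (table and column names are made
up):

    CREATE PROCEDURE purge_in_batches() LANGUAGE plpgsql AS $$
    DECLARE
        r record;
    BEGIN
        FOR r IN SELECT id FROM big_table WHERE stale LOOP
            DELETE FROM big_table WHERE id = r.id;
            COMMIT;   -- the loop's cursor is converted to an auto-held portal
        END LOOP;
    END;
    $$;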
| |
In a top-level CALL, the values of INOUT arguments will be returned as a
result row. In PL/pgSQL, the values are assigned back to the input
arguments. In other languages, the same convention as for returning
a record from a function is used. That does not require any code changes
in the PL implementations.
Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
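A short sketch (the procedure name is made up):

    CREATE PROCEDURE double_it(INOUT x int) LANGUAGE plpgsql AS $$
    BEGIN
        x := x * 2;
    END;
    $$;

    CALL double_it(21);   -- the top-level CALL returns one row: x = 42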
| |
In each of the supplied procedural languages (PL/pgSQL, PL/Perl,
PL/Python, PL/Tcl), add language-specific commit and rollback
functions/commands to control transactions in procedures in that
language. Add similar underlying functions to SPI. Some additional
cleanup so that transaction commit or abort doesn't blow away data
structures still used by the procedure call. Add execution context
tracking to CALL and DO statements so that transaction control commands
can only be issued in top-level procedure and block calls, not function
calls or other procedure or block calls.
- SPI
Add a new function SPI_connect_ext() that is like SPI_connect() but
allows passing option flags. The only option flag right now is
SPI_OPT_NONATOMIC. A nonatomic SPI connection can execute transaction
control commands; an ordinary (atomic) one cannot. This is meant to be
passed down from CALL and DO statements which themselves know in which
context they are called. A nonatomic SPI connection uses different
memory management. A normal SPI connection allocates its memory in
TopTransactionContext. For nonatomic connections we use PortalContext
instead. As the comment in SPI_connect_ext() (previously SPI_connect())
indicates, one could potentially use PortalContext in all cases, but it
seems safest to leave the existing uses alone, because this stuff is
complicated enough already.
SPI also gets new functions SPI_start_transaction(), SPI_commit(), and
SPI_rollback(), which can be used by PLs to implement their transaction
control logic.
- portalmem.c
Some adjustments were made in the code that cleans up portals at
transaction abort. The portal code could already handle a command
*committing* a transaction and continuing (e.g., VACUUM), but it was not
quite prepared for a command *aborting* a transaction and continuing.
In AtAbort_Portals(), remove the code that marks an active portal as
failed. As the comment there already predicted, this doesn't work if
the running command wants to keep running after transaction abort. And
it's actually not necessary, because pquery.c is careful to run all
portal code in a PG_TRY block and explicitly runs MarkPortalFailed() if
there is an exception. So the code in AtAbort_Portals() is never used
anyway.
In AtAbort_Portals() and AtCleanup_Portals(), we need to be careful not
to clean up active portals too much. This mirrors similar code in
PreCommit_Portals().
- PL/Perl
Gets new functions spi_commit() and spi_rollback()
- PL/pgSQL
Gets new commands COMMIT and ROLLBACK.
Update the PL/SQL porting example in the documentation to reflect that
transactions are now possible in procedures.
- PL/Python
Gets new functions plpy.commit and plpy.rollback.
- PL/Tcl
Gets new commands commit and rollback.
Reviewed-by: Andrew Dunstan <andrew.dunstan@2ndquadrant.com>
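A PL/Perl sketch of the new calls (the table and procedure names are made
up):

    CREATE TABLE test1 (a int);

    CREATE PROCEDURE transaction_test() LANGUAGE plperl AS $$
    foreach my $i (0..9) {
        spi_exec_query("INSERT INTO test1 (a) VALUES ($i)");
        if ($i % 2 == 0) {
            spi_commit();
        } else {
            spi_rollback();
        }
    }
    $$;

    CALL transaction_test();   -- only the even values remain committed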
| |
This adds a new object type "procedure" that is similar to a function
but does not have a return type and is invoked by the new CALL statement
instead of SELECT or similar. This implementation is aligned with the
SQL standard and compatible with or similar to other SQL implementations.
This commit adds new commands CALL, CREATE/ALTER/DROP PROCEDURE, as well
as ALTER/DROP ROUTINE that can refer to either a function or a
procedure (or an aggregate function, as an extension to SQL). There is
also support for procedures in various utility commands such as COMMENT
and GRANT, as well as support in pg_dump and psql. Support for defining
procedures is available in all the languages supplied by the core
distribution.
While this commit is mainly syntactic sugar around existing functionality,
future features will rely on having procedures as a separate object
type.
Reviewed-by: Andrew Dunstan <andrew.dunstan@2ndquadrant.com>
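A minimal sketch of the new object type and its invocation (names are
made up):

    CREATE TABLE log_tbl (msg text);

    CREATE PROCEDURE log_message(m text)
    LANGUAGE SQL
    AS $$
        INSERT INTO log_tbl VALUES (m);
    $$;

    CALL log_message('hello');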
| |
In passing, don't insist on rsi->expectedDesc being set unless we
actually need it; this allows succeeding in a couple of cases where
PL/Perl functions returning setof composite would have failed before,
and makes the error message more apropos in other cases.
Discussion: https://postgr.es/m/4206.1499798337@sss.pgh.pa.us
| |
Commit 59702716 added transition table support to PL/pgSQL so that
SQL queries in trigger functions could access those transient
tables. In order to provide the same level of support for PL/Perl,
PL/Python and PL/Tcl, refactor the relevant code into a new
function SPI_register_trigger_data. Call the new function in the
trigger handler of all four PLs, and document it as a public SPI
function so that authors of out-of-tree PLs can do the same.
Also get rid of a second QueryEnvironment object that was
maintained by PL/pgsql. That was previously used to deal with
cursors, but the same approach wasn't appropriate for PLs that are
less tangled up with core code. Instead, have SPI_cursor_open
install the connection's current QueryEnvironment, as already
happens for SPI_execute_plan.
While in the docs, remove the note that transition tables were only
supported in C and PL/pgSQL triggers, and correct some omissions.
Thomas Munro with some work by Kevin Grittner (mostly docs)
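A sketch of what this enables for PL/Perl (table and function names are
made up):

    CREATE TABLE audited (val int);

    CREATE FUNCTION count_inserted() RETURNS trigger LANGUAGE plperl AS $$
        my $rv = spi_exec_query("SELECT count(*) AS n FROM new_rows");
        elog(NOTICE, "inserted " . $rv->{rows}[0]{n} . " rows");
        return;
    $$;

    CREATE TRIGGER audited_count AFTER INSERT ON audited
        REFERENCING NEW TABLE AS new_rows
        FOR EACH STATEMENT EXECUTE PROCEDURE count_inserted();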
| |
We were passing error message texts to croak() verbatim, which turns out
not to work if the text contains non-ASCII characters; Perl mangles their
encoding, as reported in bug #13638 from Michal Leinweber. To fix, convert
the text into a UTF8-encoded SV first.
It's hard to test this without risking failures in different database
encodings; but we can follow the lead of plpython, which is already
assuming that no-break space (U+00A0) has an equivalent in all encodings
we care about running the regression tests in (cf commit 2dfa15de5).
Back-patch to 9.1. The code is quite different in 9.0, and anyway it seems
too risky to put something like this into 9.0's final minor release.
Alex Hunsaker, with suggestions from Tim Bunce and Tom Lane
| |
From: Dimitri Fontaine <dimitri@2ndQuadrant.fr>
| |
Set per-file-type attributes in .gitattributes to fine-tune whitespace
checks. With the associated cleanups, the tree is now clean for git
| |
"eval q{foo}" used to complain that the error was on line 2 of the eval'd
string, because eval internally tacked on "\n;" so that the end of the
erroneous command was indeed on line 2. But as of Perl 5.18 it more
sanely says that the error is on line 1. To avoid Perl-version-dependent
regression test results, use "eval q{foo;}" instead in the two places
where this matters. Per buildfarm.
Since people might try to use newer Perl versions with older PG releases,
back-patch as far as 9.0 where these test cases were added.
| |
Given what we now know about the cause of this bug, it seems like it'd
be a real good idea to include it in the plperl regression tests, so as
to catch any platform-specific cases where the code gets misoptimized.
| |
validate_plperl_function() supposed that it could free an old
plperl_proc_desc struct immediately upon detecting that it was stale.
However, if a plperl function is called recursively, this could result
in deleting the struct out from under an outer invocation, leading to
misbehavior or crashes. Add a simple reference-count mechanism to
ensure that such structs are freed only when the last reference goes
away.
Per investigation of bug #7516 from Marko Tiikkaja. I am not certain
that this error explains his report, because he says he didn't have
any recursive calls --- but it's hard to see how else it could have
crashed right there. In any case, this definitely fixes some problems
in the area.
Back-patch to all active branches.
| |
These only pass cleanly on UTF8 and SQL_ASCII encodings, besides the
Japanese encoding in which they were originally written, which is clearly
not good enough. Since the functionality they test has not ever been
tested from PL/Perl, the best answer seems to be to remove the new tests
completely.
Per buildfarm results and ensuing discussion.
| |
When in SQL_ASCII encoding, strings passed around are not necessarily
UTF8-safe. We had already fixed this in some places, but it looks like
we missed some.
I had to backpatch Peter Eisentraut's a8b92b60 to 9.1 in order for this
patch to cherry-pick more cleanly.
Patch from Alex Hunsaker, tweaked by Kyotaro HORIGUCHI and myself.
Some desultory cleanup and comment addition by me, during patch review.
Per bug report from Christoph Berg in
20120209102116.GA14429@msgid.df7cb.de
| |
pgindent and perltidy should clean up the rest.
| |
Along the way, add a missing dependency in the GNUmakefile.
Alex Hunsaker, with a slight adjustment by me.
| |
Amit Khandekar and Alex Hunsaker.
Backpatched to 9.1 where the problem first occurred.
| |
This patch restores the pre-9.1 behavior that pl/perl functions returning
VOID ignore the result value of their last Perl statement. 9.1.0
unintentionally threw an error if the last statement returned a reference,
as reported by Amit Khandekar.
Also, make sure it works to return a string value for a composite type,
so long as the string meets the type's input format. We already allowed
the equivalent behavior for arrays, so it seems inconsistent to not allow
it for composites.
In addition, ensure we throw errors for attempts to return arrays or hashes
when the function's declared result type is not an array or composite type,
respectively. Pre-9.1 versions rather uselessly returned strings like
ARRAY(0x221a9a0) or HASH(0x221aa90), while 9.1.0 threw an error for the
hash case and returned a garbage value for the array case.
Also, clean up assorted grotty coding in Perl array conversion, including
use of a session-lifespan memory context to accumulate the array value
(resulting in session-lifespan memory leak on error), failure to apply the
declared typmod if any, and failure to detect some cases of non-rectangular
multi-dimensional arrays.
Alex Hunsaker and Tom Lane
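A sketch of the composite-as-string behavior (type and function names are
made up):

    CREATE TYPE pair AS (a int, b text);

    CREATE FUNCTION make_pair() RETURNS pair LANGUAGE plperl AS $$
        return '(1,foo)';                 # a string in the type's input format ...
        # return { a => 1, b => 'foo' };  # ... or a hash reference, as before
    $$;

    SELECT * FROM make_pair();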
| |
We used to just remember the GucSource, but saving GucContext too provides
a little more information --- notably, whether a SET was done by a
superuser or regular user. This allows us to rip out the fairly dodgy code
that define_custom_variable used to use to try to infer the context to
re-install a pre-existing setting with. In particular, it now works for
a superuser to SET an extension's SUSET custom variable before loading the
associated extension, because GUC can remember whether the SET was done as
a superuser or not. The plperl regression tests contain an example where
this is useful.
| |
This variable provides only marginal error-prevention capability (since
it can only check the prefix of a qualified GUC name), and the consensus
is that that isn't worth the amount of hassle that maintaining the setting
creates for DBAs. So, let's just remove it.
With this commit, the system will silently accept a value for any qualified
GUC name at all, whether it has anything to do with any known extension or
not. (Unqualified names still have to match known built-in settings,
though; and you will get a WARNING at extension load time if there's an
unrecognized setting with that extension's prefix.)
There's still some discussion ongoing about whether to tighten that up and
if so how; but if we do come up with a solution, it's not likely to look
anything like custom_variable_classes.
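A short sketch of the resulting behavior (the setting name is made up):

    -- no pre-declaration needed; any qualified name is accepted:
    SET myapp.batch_size = '500';
    SHOW myapp.batch_size;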
| |
This was broken in commit ef19dc6d39dd2490ff61489da55d95d6941140bf by
the Bunce/Hunsaker/Dunstan team, which moved the declaration from
plperl_create_sub to plperl_call_perl_trigger_func. This doesn't
actually work because the validator code would not find the variable
declared; and even if you manage to get past the validator, it still
doesn't work because get_sv("_TD", GV_ADD) doesn't have the expected
effect. The only reason this got beyond testing is that it only fails
in strict mode.
We need to declare it as a global just like %_SHARED; it is simpler than
trying to actually do what the patch initially intended, and is said to
have the same performance benefit.
As a more serious issue, fix $_TD not being properly local()ized,
meaning nested trigger functions would clobber $_TD.
Alex Hunsaker, per test report from Greg Mullane
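A sketch of a trigger function relying on the global (and per-call
local()ized) $_TD (table and column names are made up):

    CREATE FUNCTION shout_name() RETURNS trigger LANGUAGE plperl AS $$
        # $_TD is predeclared like %_SHARED, so this works under strict mode
        $_TD->{new}{name} = uc $_TD->{new}{name};
        return 'MODIFY';
    $$;

    CREATE TRIGGER shout BEFORE INSERT ON people
        FOR EACH ROW EXECUTE PROCEDURE shout_name();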
| |
More generally, arrays are turned into Perl array references, and row and
composite types are turned into Perl hash references. This is done
recursively, in a way that's natural to every Perl programmer.
To avoid a backwards compatibility hit, the string representation of
each structure is also available if the function requests it.
Authors: Alexey Klyukin and Alex Hunsaker.
Some code cleanups by me.
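A minimal sketch of the new calling convention (the function name is made
up):

    CREATE FUNCTION sum_array(int[]) RETURNS int LANGUAGE plperl AS $$
        my $aref = shift;        # the array arrives as a Perl array reference
        my $sum = 0;
        $sum += $_ for @$aref;
        return $sum;
    $$;

    SELECT sum_array(ARRAY[1, 2, 3]);   -- 6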
| |
As noted by Jan Urbanski, this flag is in fact needed to ensure that the
function's input/result conversion functions are set up as expected.
Add a regression test to discourage anyone from making same mistake
in future.
| |
This patch adds the SQL-standard concept of an INSTEAD OF trigger, which
is fired instead of performing a physical insert/update/delete. The
trigger function is passed the entire old and/or new rows of the view,
and must figure out what to do to the underlying tables to implement
the update. So this feature can be used to implement updatable views
using trigger programming style rather than rule hacking.
In passing, this patch corrects the names of some columns in the
information_schema.triggers view. It seems the SQL committee renamed
them somewhere between SQL:99 and SQL:2003.
Dean Rasheed, reviewed by Bernd Helmle; some additional hacking by me.
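A sketch of the feature (view, table, and function names are made up):

    CREATE TABLE items (id int, label text, active bool);
    CREATE VIEW active_items AS
        SELECT id, label FROM items WHERE active;

    CREATE FUNCTION active_items_insert() RETURNS trigger LANGUAGE plpgsql AS $$
    BEGIN
        INSERT INTO items VALUES (NEW.id, NEW.label, true);
        RETURN NEW;
    END;
    $$;

    CREATE TRIGGER active_items_ins INSTEAD OF INSERT ON active_items
        FOR EACH ROW EXECUTE PROCEDURE active_items_insert();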
| |
at end of files.
| |
Author: Alex Hunsaker
| |
Stop relying on Perl's Safe.pm to enforce trusted PL/Perl's restrictions,
as it is fundamentally insecure. Instead apply an opmask to the whole
interpreter that
imposes restrictions on unsafe operations. These restrictions are much harder
to subvert than is Safe.pm, since there is no container to be broken out of.
Backported to release 7.4.
In releases 7.4, 8.0 and 8.1 this also includes the necessary backporting of
the two interpreters model for plperl and plperlu adopted in release 8.2.
In versions 8.0 and up, the use of Perl's POSIX module to undo its locale
mangling on Windows has become insecure with these changes, so it is
replaced by our own routine, which is also faster.
Nice side effects of the changes include that it is now possible to use perl's
"strict" pragma in a natural way in plperl, and that perl's $a and
$b variables now work as expected in sort routines, and that function
compilation is significantly faster.
Tim Bunce and Andrew Dunstan, with reviews from Alex Hunsaker and
Alexey Klyukin.
Security: CVE-2010-1169
| |
message. Error noticed by Tom and buildfarm member kite.
| |
Prevent use of another buggy version of Safe.pm.
Only register the exit handler if we have successfully created an interpreter.
Change log level of perl warnings from NOTICE to WARNING.
The infrastructure is there if in future we decide to allow
DBAs to specify extra modules that will be allowed in trusted code.
However, for now the relevant variables are declared as lexicals
rather than as package variables, so that they are not (or should not be)
accessible.
Mostly code from Tim Bunce, reviewed by Alex Hunsaker, with some
tweaks by me.
| |
patch
| |
language-specific startup. Rename recently added plperl.on_perl_init
to plperl.on_init. Also, code cleanup for utf8 hack.
Patch from Tim Bunce, reviewed by Alex Hunsaker.
| |
Tim Bunce.
| |
on load. Also, handle END blocks in plperl.
Database access is disallowed during both these operations, although it might be allowed in END blocks in future.
Patch from Tim Bunce.
| |
- Allow (ineffective) use of 'require' in plperl
If the required module is not already loaded then it dies.
So "use strict;" now works in plperl.
- Pre-load the feature module if perl >= 5.10.
So "use feature :5.10;" now works in plperl.
- Stored procedure subs are now given names.
The names are not visible in ordinary use, but they make
tools like Devel::NYTProf and Devel::Cover much more useful.
- Simplified and generalized the subroutine creation code.
Now one code path for generating sub source code, not four.
Can generate multiple 'use' statements with specific imports
(which handles plperl.use_strict currently and can easily
be extended to handle a plperl.use_feature=':5.12' in future).
- Disallows use of Safe version 2.20 which is broken for PL/Perl.
http://rt.perl.org/rt3/Ticket/Display.html?id=72068
- Assorted minor optimizations by pre-growing data structures.
Patch from Tim Bunce, reviewed by Alex Hunsaker.
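A sketch of what now works in a trusted PL/Perl function, assuming the
server's Perl is at least 5.10 (the function name is made up):

    CREATE FUNCTION greet(text) RETURNS text LANGUAGE plperl AS $$
        use strict;           # strict is already loaded, so the require is a no-op
        use feature ':5.10';  # feature is pre-loaded on Perl >= 5.10
        my $who = shift;
        return "hello, $who";
    $$;

    SELECT greet('world');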