This is claimed in the documentation but there was no test case for it.
Reported-by: Bogdan Grigorenko <gri.bogdan.2020@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/173543330569.680.6706329879058172623%40wrigleys.postgresql.org
This adds a new variant of generated columns that are computed on read
(like a view, unlike the existing stored generated columns, which are
computed on write, like a materialized view).
The syntax for the column definition is
... GENERATED ALWAYS AS (...) VIRTUAL
and the VIRTUAL keyword is optional. VIRTUAL is the default rather than
STORED to match various other SQL products. (The SQL standard makes
no specification about this, but it also doesn't know about VIRTUAL or
STORED.) (Also, virtual views are the default, rather than
materialized views.)
Virtual generated columns are stored in tuples as null values. (A
very early version of this patch had the ambition to not store them at
all. But so much stuff breaks or gets confused if you have tuples
where a column in the middle is completely missing. This is a
compromise, and it still saves space over being forced to use stored
generated columns. If we ever find a way to improve this, a bit of
pg_upgrade cleverness could allow for upgrades to a newer scheme.)
The capabilities and restrictions of virtual generated columns are
mostly the same as for stored generated columns. In some cases, this
patch keeps virtual generated columns more restricted than they might
technically need to be, to keep the two kinds consistent. Some of
that could maybe be relaxed later after separate careful
considerations.
Some functionality is currently not supported, but could be added
later as incremental features, some more easily than others:
- index on or using a virtual column
- hence also no unique constraints on virtual columns
- extended statistics on virtual columns
- foreign-key constraints on virtual columns
- not-null constraints on virtual columns (check constraints are supported)
- ALTER TABLE / DROP EXPRESSION
- virtual column cannot have domain type
- virtual columns are not supported in logical replication
The tests in generated_virtual.sql have been copied over from
generated_stored.sql with the keyword replaced. This way we can make
sure the behavior is mostly aligned and any differences are visible.
Tests for features that are not yet supported are commented out.
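For illustration, a minimal example of the syntax described above
(table and column names invented):

    CREATE TABLE prices (
        net   numeric NOT NULL,
        -- computed on read; VIRTUAL could be omitted, since it is the default
        gross numeric GENERATED ALWAYS AS (net * 1.19) VIRTUAL
    );

    INSERT INTO prices (net) VALUES (100);
    SELECT gross FROM prices;   -- the expression is evaluated at read time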
Reviewed-by: Jian He <jian.universality@gmail.com>
Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com>
Tested-by: Shlok Kyal <shlok.kyal.oss@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/a368248e-69e4-40be-9c07-6c3b5880b0a6@eisentraut.org
These tests relied on both next() and __next__(), but only the former is
needed since Python 2 support has been removed, so let's simplify the
tests a bit.
Author: Erik Wienhold
Discussion: https://postgr.es/m/173209043143.2092749.13692266486972491694@wrigleys.postgresql.org
If we recursed to a new call of the same function, with a different
coldeflist (AS clause), it would fail because the inner call would
overwrite the outer call's idea of what to return. This is vaguely
like 1d2fe56e4 and c5bec5426, but it's not due to any API decisions:
it's just that we computed the actual output rowtype at the start of
the call, and saved it in the per-procedure data structure. We can
fix it at basically zero cost by doing the computation at the end
of each call instead of the start.
It's not clear that there's any real-world use-case for such a
function, but given that it doesn't cost anything to fix,
it'd be silly not to.
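A hypothetical sketch of the failure shape (not the exact reproducer
from the report; the point is that the inner call uses a different
coldeflist than the outer one):

    CREATE FUNCTION rec(go boolean) RETURNS record
    LANGUAGE plpython3u AS $$
    if go:
        # recursive call of the same function with a different coldeflist;
        # it used to overwrite the outer call's idea of what to return
        plpy.execute("SELECT * FROM rec(false) AS t(a text)")
        return (1, 2)
    return ("just one text column",)
    $$;

    SELECT * FROM rec(true) AS t(x int, y int);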
Per report from Andreas Karlsson. Back-patch to all supported
branches.
Discussion: https://postgr.es/m/1651a46d-3c15-4028-a8c1-d74937b54e19@proxel.se
If a plpython-language trigger caused another one to be invoked,
the "TD" dictionary created for the inner one would overwrite the
outer one's "TD" dictionary. This is more or less the same problem
that 1d2fe56e4 fixed for ordinary functions in plpython, so fix it
the same way, by saving and restoring "TD" during a recursive
invocation.
This fix makes an ABI-incompatible change in struct PLySavedArgs.
I'm not too worried about that because it seems highly unlikely that
any extension is messing with those structs. We could imagine doing
something weird to preserve nominal ABI compatibility in the back
branches, like keeping the saved TD object in an extra element of
namedargs[]. However, that would only be very nominal compatibility:
if anything *is* touching PLySavedArgs, it would likely do the wrong
thing due to not knowing about the additional value. So I judge it
not worth the ugliness to do something different there.
(I also changed struct PLyProcedure, but its added field fits
into formerly-padding space, so that should be safe.)
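A rough sketch of the scenario (table and function names invented):

    CREATE FUNCTION audit_trig() RETURNS trigger
    LANGUAGE plpython3u AS $$
    # this UPDATE can fire another PL/Python trigger, whose "TD" used to
    # overwrite this trigger's "TD"
    plpy.execute("UPDATE counters SET n = n + 1 WHERE name = 'audit'")
    plpy.notice("fired on " + TD["table_name"])
    $$;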
Per bug #18456 from Jacques Combrink. This bug is very ancient,
so back-patch to all supported branches.
Discussion: https://postgr.es/m/3008982.1714853799@sss.pgh.pa.us
PLy_elog() could not correctly handle cases where a SPI call failed
and filled in a DETAIL string, which was able to trigger an
assertion. We may want to improve this infrastructure so that it can
provide any extra detail information supplied by an error stack, but
this is left as a future improvement as it could impact existing error
stacks and any applications that depend on them. For now, the assertion
is removed and a regression test is added to cover the case of a failure
with a detail string.
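One way to produce such a failure, for illustration (the table is
invented; a unique violation fills in a DETAIL string):

    CREATE TABLE uniq (a int PRIMARY KEY);

    DO $$
    plpy.execute("INSERT INTO uniq VALUES (1)")
    # fails via SPI with DETAIL: Key (a)=(1) already exists.
    plpy.execute("INSERT INTO uniq VALUES (1)")
    $$ LANGUAGE plpython3u;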
This problem exists since 2bd78eb8d51c, so backpatch all the way down
with tweaks to the regression tests output added where required.
Author: Alexander Lakhin
Discussion: https://postgr.es/m/18070-ab9c171cbf4ebb0f@postgresql.org
Backpatch-through: 11
Like plperl before f47004add, plpython wasn't being sufficiently
careful about checking that list-of-list structures represent
rectangular arrays, so that it would accept some cases in which
different parts of the "array" are nested to different depths.
This was exacerbated by Python's weak distinction between
sequences and lists, so that in some cases strings could get
treated as though they are lists (and burst into individual
characters) even though a different ordering of the upper-level
list would give a different result.
Some of this behavior was unreachable (without risking a crash)
before 81eaaf65e. It seems like a good idea to clean it all up
in the same releases, rather than shipping a non-crashing but
nonetheless visibly buggy behavior in the name of minimal change.
Hence, back-patch.
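Sketches of the now-rejected shapes (function names invented):

    -- rejected: sublists nested to different depths
    CREATE FUNCTION ragged() RETURNS int[] LANGUAGE plpython3u AS $$
    return [[1, 2], 3]
    $$;
    SELECT ragged();

    -- strings are sequences too, so before the fix a string in this
    -- position could be burst into individual characters depending on
    -- the ordering of the upper-level list
    CREATE FUNCTION mixed() RETURNS text[] LANGUAGE plpython3u AS $$
    return [["a", "b"], "cd"]
    $$;
    SELECT mixed();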
Per bug #17912 and further testing by Alexander Lakhin.
Discussion: https://postgr.es/m/17912-82ceed78731d9cdc@postgresql.org
If PLySequence_ToArray came across a zero-length sublist, it'd compute
the overall array size as zero, possibly leading to a memory clobber.
(This would likely qualify as a security bug, were it not that plpython
is an untrusted language already.)
I think there are other corner-case issues in this code as well, notably
that the error messages don't match the core code and for some ranges
of array sizes you'd get "invalid memory alloc request size" rather than
the intended message about array size.
Really this code has no business doing its own array size calculation
at all, so remove the faulty code in favor of using ArrayGetNItems().
Per bug #17912 from Alexander Lakhin. Bug seems to have come in with
commit 94aceed31, so back-patch to all supported branches.
Discussion: https://postgr.es/m/17912-82ceed78731d9cdc@postgresql.org
PL/pgSQL was misspelled in a few places, so fix these.
Author: Zhang Mingli
Reviewed-by: Richard Guo
Discussion: https://postgr.es/m/1bd41572-9cd9-465e-9f59-ee45385e51b4@Spark
Since 19252e8ec93 we reject Python 2 during build configuration. Now that the
dust on the buildfarm has settled, remove the regression testing
infrastructure that dealt with output differences between Python 2 and 3.
Reviewed-By: Peter Eisentraut <peter@eisentraut.org>
Reviewed-By: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/20211031184548.g4sxfe47n2kyi55r@alap3.anarazel.de
SPI_commit previously left it up to the caller to recover from any error
occurring during commit. Since that's complicated and requires use of
low-level xact.c facilities, it's not too surprising that no caller got
it right. Let's move the responsibility for cleanup into spi.c. Doing
that requires redefining SPI_commit as starting a new transaction, so
that it becomes equivalent to SPI_commit_and_chain except that you get
default transaction characteristics instead of preserving the prior
transaction's characteristics. We can make this pretty transparent
API-wise by redefining SPI_start_transaction() as a no-op. Callers
that expect to do something in between might be surprised, but
available evidence is that no callers do so.
Having made that API redefinition, we can fix this mess by having
SPI_commit[_and_chain] trap errors and start a new, clean transaction
before re-throwing the error. Likewise for SPI_rollback[_and_chain].
Some cleanup is also needed in AtEOXact_SPI, which was nowhere near
smart enough to deal with SPI contexts nested inside a committing
context.
While plperl and pltcl need no changes beyond removing their now-useless
SPI_start_transaction() calls, plpython needs some more work because it
hadn't gotten the memo about catching commit/rollback errors in the
first place. Such an error resulted in longjmp'ing out of the Python
interpreter, which leaks Python stack entries at present and is reported
to crash Python 3.11 altogether. Add the missing logic to catch such
errors and convert them into Python exceptions.
We are probably going to have to back-patch this once Python 3.11 ships,
but it's a sufficiently basic change that I'm a bit nervous about doing
so immediately. Let's let it bake awhile in HEAD first.
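A minimal sketch of the added PL/Python behavior (procedure name
invented; the commit itself can fail, e.g. when a deferred constraint
is violated):

    CREATE PROCEDURE safe_commit() LANGUAGE plpython3u AS $$
    try:
        plpy.commit()        # a failure here, e.g. from a deferred
    except Exception as e:   # constraint, is now a Python exception
        plpy.notice("commit failed: %s" % e)
    $$;

    CALL safe_commit();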
Peter Eisentraut and Tom Lane
Discussion: https://postgr.es/m/3375ffd8-d71c-2565-e348-a597d6e739e3@enterprisedb.com
Discussion: https://postgr.es/m/17416-ed8fe5d7213d6c25@postgresql.org
Commit 2453ea142 redefined pg_proc.proargtypes to include the types of
OUT parameters, for procedures only. While that had some advantages
for implementing the SQL-spec behavior of DROP PROCEDURE, it was pretty
disastrous from a number of other perspectives. Notably, since the
primary key of pg_proc is name + proargtypes, this made it possible to
have multiple procedures with identical names + input arguments and
differing output argument types. That would make it impossible to call
any one of the procedures by writing just NULL (or "?", or any other
data-type-free notation) for the output argument(s). The change also
seems likely to cause grave confusion for client applications that
examine pg_proc and expect the traditional definition of proargtypes.
Hence, revert the definition of proargtypes to what it was, and
undo a number of complications that had been added to support that.
To support the SQL-spec behavior of DROP PROCEDURE, when there are
no argmode markers in the command's parameter list, we perform the
lookup both ways (that is, matching against both proargtypes and
proallargtypes), succeeding if we get just one unique match.
In principle this could result in ambiguous-function failures
that would not happen when using only one of the two rules.
However, overloading of procedure names is thought to be a pretty
rare usage, so this shouldn't cause many problems in practice.
Postgres-specific code such as pg_dump can defend against any
possibility of such failures by being careful to specify argmodes
for all procedure arguments.
This also fixes a few other bugs in the area of CALL statements
with named parameters, and improves the documentation a little.
catversion bump forced because the representation of procedures
with OUT arguments changes.
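For illustration (procedure name and body invented):

    CREATE PROCEDURE make_pair(IN x int, OUT lo int, OUT hi int)
    LANGUAGE plpgsql AS $$
    BEGIN
        lo := x;
        hi := x + 1;
    END $$;

    -- output arguments are written as NULL (or any data-type-free notation)
    CALL make_pair(5, NULL, NULL);

    -- with argmodes specified, the lookup is unambiguous; without them,
    -- both matching rules described above are tried
    DROP PROCEDURE make_pair(IN int, OUT int, OUT int);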
Discussion: https://postgr.es/m/3742981.1621533210@sss.pgh.pa.us
Starting with Python 3.10, the stack trace looks different:
- PL/Python function "subtransaction_exit_subtransaction_in_with", line 3, in <module>
- s.__exit__(None, None, None)
+ PL/Python function "subtransaction_exit_subtransaction_in_with", line 2, in <module>
+ with plpy.subtransaction() as s:
Using try/except instead makes the error always look the same.
(See https://github.com/python/cpython/pull/25719 for the discussion
of this change in Python.)
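A sketch of the rewritten test shape (assuming, as in the tests, that
exiting the subtransaction early raises an exception):

    DO $$
    try:
        with plpy.subtransaction() as s:
            s.__exit__(None, None, None)   # exit the subtransaction early
    except Exception as e:
        plpy.notice(str(e))   # the reported line no longer varies
    $$ LANGUAGE plpython3u;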
Author: Honza Horak <hhorak@redhat.com>
Discussion: https://www.postgresql.org/message-id/flat/853083.1620749597%40sss.pgh.pa.us
RHBZ: https://bugzilla.redhat.com/show_bug.cgi?id=1959080
Unlike for functions, OUT parameters for procedures are part of the
signature. Therefore, they have to be listed in pg_proc.proargtypes
as well as mentioned in ALTER PROCEDURE and DROP PROCEDURE.
Reviewed-by: Andrew Dunstan <andrew.dunstan@2ndquadrant.com>
Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/2b8490fe-51af-e671-c504-47359dc453c5@2ndquadrant.com
Change the exception syntax used in the tests to use the more current
except Exception as ex:
rather than the old
except Exception, ex:
Since support for Python <2.6 has been removed, all supported versions
now support the new style, and we can save one step in the Python 3
compatibility conversion.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/98b69261-298c-13d2-f34d-836fd9c29b21%402ndquadrant.com
Supporting very old Python versions is a maintenance burden,
especially with the several variant test files to maintain for Python
<2.6.
Since we have dropped support for older OpenSSL versions in
7b283d0e1d1d79bf1c962d790c94d2a53f3bb38a, RHEL 5 is now effectively
desupported, and that was also the only mainstream operating system
still using Python versions before 2.6, so it's a good time to drop
those as well.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/98b69261-298c-13d2-f34d-836fd9c29b21%402ndquadrant.com
This is an SQL-standard feature that allows creating columns that are
computed from expressions rather than assigned, similar to a view or
materialized view but on a column basis.
This implements one kind of generated column: stored (computed on
write). Another kind, virtual (computed on read), is planned for the
future, and some room is left for it.
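A minimal example of the syntax (table and expression invented):

    CREATE TABLE boxes (
        w    numeric,
        h    numeric,
        -- computed on write and stored like a normal column
        area numeric GENERATED ALWAYS AS (w * h) STORED
    );

    INSERT INTO boxes (w, h) VALUES (2, 3);
    SELECT area FROM boxes;   -- 6, read back from storage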
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/b151f851-4019-bdb1-699e-ebab07d2f40a@2ndquadrant.com
Python 3.7 removes the trailing comma in the repr() of
BaseException (see <https://bugs.python.org/issue30399>), leading to
test output differences. Work around that by composing the equivalent
test output in a more manual way.
Previously, committing or aborting inside a cursor loop was prohibited
because that would close and remove the cursor. To allow that,
automatically convert such cursors to holdable cursors so they survive
commits or rollbacks. Portals now have a new state "auto-held", which
means they have been converted automatically from pinned. An auto-held
portal is kept on transaction commit or rollback, but is still removed
when returning to the main loop on error.
This supports all languages that have cursor loop constructs: PL/pgSQL,
PL/Python, PL/Perl.
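A sketch of the newly allowed pattern (table name invented):

    CREATE PROCEDURE process_queue() LANGUAGE plpythonu AS $$
    for row in plpy.cursor("SELECT id FROM queue"):
        plpy.execute("UPDATE queue SET done = true WHERE id = %d" % row["id"])
        plpy.commit()   # the cursor is auto-converted to a holdable one
    $$;

    CALL process_queue();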
Reviewed-by: Ildus Kurbangaliev <i.kurbangaliev@postgrespro.ru>
In a top-level CALL, the values of INOUT arguments will be returned as a
result row. In PL/pgSQL, the values are assigned back to the input
arguments. In other languages, the same convention as for returning a
record from a function is used. That does not require any code changes
in the PL implementations.
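A sketch following that convention (procedure name invented):

    CREATE PROCEDURE swap(INOUT a int, INOUT b int)
    LANGUAGE plpythonu AS $$
    return (b, a)   # returned the same way a function returns a record
    $$;

    CALL swap(1, 2);   -- the top-level CALL returns a result row: a=2, b=1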
Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
plpython_error_callback() reported the name of the function associated
with the topmost PL/Python execution context. This was not merely
wrong if there were nested PL/Python contexts, but it risked a core
dump if the topmost one is an inline code block rather than a named
function. That will have proname = NULL, and so we were passing a NULL
pointer to snprintf("%s"). It seems that none of the PL/Python-testing
machines in the buildfarm will dump core for that, but some platforms do,
as reported by Marina Polyakova.
Investigation finds that there actually is an existing regression test
that used to prove that the behavior was wrong, though apparently no one
had noticed that it was printing the wrong function name. It stopped
showing the problem in 9.6 when we adjusted psql to not print CONTEXT
by default for NOTICE messages. The problem is masked (if your platform
avoids the core dump) in error cases, because PL/Python will throw away
the originally generated error info in favor of a new traceback produced
at the outer level.
Repair by using ErrorContextCallback.arg to pass the correct context to
the error callback. Add a regression test illustrating correct behavior.
Back-patch to all supported branches, since they're all broken this way.
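A rough sketch of the scenario (function name invented):

    CREATE FUNCTION inner_func() RETURNS int LANGUAGE plpythonu AS $$
    plpy.notice("hello")   # the CONTEXT line should name inner_func
    return 1
    $$;

    -- here the topmost PL/Python frame is an inline code block, which has
    -- no function name; the callback used to report that frame instead
    DO $$ plpy.execute("SELECT inner_func()") $$ LANGUAGE plpythonu;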
Discussion: https://postgr.es/m/156b989dbc6fe7c4d3223cf51da61195@postgrespro.ru
Commit 8561e4840c81f7e345be2df170839846814fa004 neglected to handle
older Python versions that don't support the "with" statement. So write
the tests in a way that older versions can handle as well.
In each of the supplied procedural languages (PL/pgSQL, PL/Perl,
PL/Python, PL/Tcl), add language-specific commit and rollback
functions/commands to control transactions in procedures in that
language. Add similar underlying functions to SPI. Some additional
cleanup so that transaction commit or abort doesn't blow away data
structures still used by the procedure call. Add execution context
tracking to CALL and DO statements so that transaction control commands
can only be issued in top-level procedure and block calls, not function
calls or other procedure or block calls.
- SPI
Add a new function SPI_connect_ext() that is like SPI_connect() but
allows passing option flags. The only option flag right now is
SPI_OPT_NONATOMIC. A nonatomic SPI connection can execute transaction
control commands; otherwise they are not allowed. This is meant to be
passed down from CALL and DO statements which themselves know in which
context they are called. A nonatomic SPI connection uses different
memory management. A normal SPI connection allocates its memory in
TopTransactionContext. For nonatomic connections we use PortalContext
instead. As the comment in SPI_connect_ext() (previously SPI_connect())
indicates, one could potentially use PortalContext in all cases, but it
seems safest to leave the existing uses alone, because this stuff is
complicated enough already.
SPI also gets new functions SPI_start_transaction(), SPI_commit(), and
SPI_rollback(), which can be used by PLs to implement their transaction
control logic.
- portalmem.c
Some adjustments were made in the code that cleans up portals at
transaction abort. The portal code could already handle a command
*committing* a transaction and continuing (e.g., VACUUM), but it was not
quite prepared for a command *aborting* a transaction and continuing.
In AtAbort_Portals(), remove the code that marks an active portal as
failed. As the comment there already predicted, this doesn't work if
the running command wants to keep running after transaction abort. And
it's actually not necessary, because pquery.c is careful to run all
portal code in a PG_TRY block and explicitly runs MarkPortalFailed() if
there is an exception. So the code in AtAbort_Portals() is never used
anyway.
In AtAbort_Portals() and AtCleanup_Portals(), we need to be careful not
to clean up active portals too much. This mirrors similar code in
PreCommit_Portals().
- PL/Perl
Gets new functions spi_commit() and spi_rollback().
- PL/pgSQL
Gets new commands COMMIT and ROLLBACK.
Update the PL/SQL porting example in the documentation to reflect that
transactions are now possible in procedures.
- PL/Python
Gets new functions plpy.commit and plpy.rollback (see the sketch after
this list).
- PL/Tcl
Gets new commands commit and rollback.
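A sketch of the new PL/Python calls (table name invented; a DO block
is a top-level block call, so transaction control is allowed):

    DO $$
    plpy.execute("CREATE TABLE IF NOT EXISTS progress (step text)")
    plpy.execute("INSERT INTO progress VALUES ('step 1')")
    plpy.commit()     # make step 1 durable
    plpy.execute("INSERT INTO progress VALUES ('step 2')")
    plpy.rollback()   # discard only step 2
    $$ LANGUAGE plpythonu;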
Reviewed-by: Andrew Dunstan <andrew.dunstan@2ndquadrant.com>
This adds a new object type "procedure" that is similar to a function
but does not have a return type and is invoked by the new CALL statement
instead of SELECT or similar. This implementation is aligned with the
SQL standard and compatible with or similar to other SQL implementations.
This commit adds new commands CALL, CREATE/ALTER/DROP PROCEDURE, as well
as ALTER/DROP ROUTINE that can refer to either a function or a
procedure (or an aggregate function, as an extension to SQL). There is
also support for procedures in various utility commands such as COMMENT
and GRANT, as well as support in pg_dump and psql. Support for defining
procedures is available in all the languages supplied by the core
distribution.
While this commit is mainly syntactic sugar around existing functionality,
future features will rely on having procedures as a separate object
type.
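A minimal example (table and procedure names invented):

    CREATE PROCEDURE insert_pair(a int, b int)
    LANGUAGE SQL
    AS $$ INSERT INTO pairs VALUES (a, b) $$;

    CALL insert_pair(1, 2);

    -- ROUTINE can refer to either a function or a procedure
    DROP ROUTINE insert_pair(int, int);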
Reviewed-by: Andrew Dunstan <andrew.dunstan@2ndquadrant.com>
Fix PL/Python so that it can handle domains over composite, and so that
it enforces domain constraints correctly in other cases that were not
always done properly before. Notably, it didn't do arrays of domains
right (oversight in commit c12d570fa), and it failed to enforce domain
constraints when returning a composite type containing a domain field,
and if a transform function is being used for a domain's base type then
it failed to enforce domain constraints on the result. Also, in many
places it missed checking domain constraints on null values, because
the plpy_typeio code simply wasn't called for Py_None.
Rather than try to band-aid these problems, I made a significant
refactoring of the plpy_typeio logic. The existing design of recursing
for array and composite members is extended to also treat domains as
containers requiring recursion, and the APIs for the module are cleaned
up and simplified.
The patch also modifies plpy_typeio to rely on the typcache more than
it did before (which was pretty much not at all). This reduces the
need for repetitive lookups, and lets us get rid of an ad-hoc scheme
for detecting changes in composite types. I added a couple of small
features to typcache to help with that.
Although some of this is fixing bugs that long predate v11, I don't
think we should risk a back-patch: it's a significant amount of code
churn, and there've been no complaints from the field about the bugs.
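A sketch of the now-enforced checks (type, domain, and function names
invented):

    CREATE TYPE raw_pair AS (lo int, hi int);
    CREATE DOMAIN ordered_pair AS raw_pair
        CHECK ((VALUE).lo <= (VALUE).hi);

    CREATE FUNCTION make_pair(a int, b int) RETURNS ordered_pair
    LANGUAGE plpythonu AS $$
    return {"lo": a, "hi": b}   # the domain CHECK is applied to the result
    $$;

    SELECT make_pair(1, 2);   -- ok
    SELECT make_pair(2, 1);   -- now rejected by the domain constraint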
Tom Lane, reviewed by Anthony Bykov
Discussion: https://postgr.es/m/24449.1509393613@sss.pgh.pa.us
Commit 59702716 added transition table support to PL/pgSQL so that
SQL queries in trigger functions could access those transient
tables. In order to provide the same level of support for PL/Perl,
PL/Python and PL/Tcl, refactor the relevant code into a new
function SPI_register_trigger_data. Call the new function in the
trigger handler of all four PLs, and document it as a public SPI
function so that authors of out-of-tree PLs can do the same.
Also get rid of a second QueryEnvironment object that was
maintained by PL/pgsql. That was previously used to deal with
cursors, but the same approach wasn't appropriate for PLs that are
less tangled up with core code. Instead, have SPI_cursor_open
install the connection's current QueryEnvironment, as already
happens for SPI_execute_plan.
While in the docs, remove the note that transition tables were only
supported in C and PL/pgSQL triggers, and correct some omissions.
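A sketch of what this enables (table and function names invented):

    CREATE FUNCTION count_new() RETURNS trigger LANGUAGE plpythonu AS $$
    # SQL issued via SPI inside the trigger can see the transition table
    n = plpy.execute("SELECT count(*) AS c FROM newtab")[0]["c"]
    plpy.notice("%s rows inserted" % n)
    $$;

    CREATE TRIGGER count_new_trig
        AFTER INSERT ON data_tbl
        REFERENCING NEW TABLE AS newtab
        FOR EACH STATEMENT
        EXECUTE PROCEDURE count_new();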
Thomas Munro with some work by Kevin Grittner (mostly docs)
Instead of

    plan = plpy.prepare(...)
    res = plpy.execute(plan, ...)

you can now write

    plan = plpy.prepare(...)
    res = plan.execute(...)

or even

    res = plpy.prepare(...).execute(...)
and similarly for the cursor() method.
This is more object-oriented in style, and makes the hybrid nature of
the existing execute() function less confusing.
Reviewed-by: Andrew Dunstan <andrew.dunstan@2ndquadrant.com>
There is no specific reason for this right now, but keeping support for
old Python versions around indefinitely increases the maintenance
burden. The oldest supported Python version is now Python 2.4, which is
still shipped in RHEL/CentOS 5 by default.
In configure, add a check for the required Python version and give a
friendly error message for an old version, instead of relying on an
obscure build error later on.
It makes it readable again and makes merges more manageable.
Turns out that the output format of Python Decimal isn't totally platform-
independent either. There are other tests for multi-dimensional arrays,
so rather than try to fix this test case, just remove it.
Per buildfarm member prairiedog.
The number of decimals printed for floats varied in this test case, as
noted by several buildfarm members. There's nothing special about floats
and arrays in the code being tested, so replace the floats with numerics to
make the output platform-independent.
That used to be accepted, so let's try to give a hint to users on why
their PL/Python functions no longer work.
Reviewed by Pavel Stehule.
Discussion: <CAH38_tmbqwaUyKs9yagyRra=SMaT45FPBxk1pmTYcM0TyXGG7Q@mail.gmail.com>
Multi-dimensional arrays can now be used as arguments to a PL/Python
function (this used to throw an error), and they can be returned as
nested Python lists.
This makes a backwards-incompatible change to the handling of composite
types in arrays. Previously, you could return an array of composite types
as "[[col1, col2], [col1, col2]]", but now that is interpreted as a two-
dimensional array. Composite types in arrays must now be returned as
Python tuples, not lists, to resolve the ambiguity. I.e. "[(col1, col2),
(col1, col2)]".
To avoid breaking backwards-compatibility, when not necessary, () is still
accepted for arrays at the top-level, but it is always treated as a
single-dimensional array. Likewise, [] is still accepted for composite types,
when they are not in an array. Update the documentation to recommend using []
for arrays, and () for composite types, with a mention that those other things
are also accepted in some contexts.
This needs to be mentioned in the release notes.
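Examples of the conventions described above (names invented):

    CREATE FUNCTION matrix() RETURNS int[] LANGUAGE plpythonu AS $$
    return [[1, 2], [3, 4]]   # nested lists now make a two-dimensional array
    $$;

    CREATE TYPE two_ints AS (a int, b int);
    CREATE FUNCTION pairs() RETURNS two_ints[] LANGUAGE plpythonu AS $$
    # composites inside an array must now be tuples, not lists
    return [(1, 2), (3, 4)]
    $$;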
Alexey Grishchenko, Dave Cramer and me. Reviewed by Pavel Stehule.
Discussion: <CAH38_tmbqwaUyKs9yagyRra=SMaT45FPBxk1pmTYcM0TyXGG7Q@mail.gmail.com>
Similar to 0137caf273, this makes contrib and PL tests less dependent
on hash-table order. After this commit, at least some order-affecting
changes to execGrouping.c no longer result in regression test changes.
Format the example and test code more in line with Python style standards.
Improve whitespace. Improve documentation formatting.
Rename schema -> schema_name etc. to remain consistent with C API and
PL/pgSQL.
In commit 5c3c3cd0a3046339597a03bc708cb5530dc07059, the new tests were
apparently just dumped into the first convenient file. Move them to a
separate file dedicated to testing that functionality and leave the
plpython_test test to test basic functionality, as it did before.
Per buildfarm.
Pavel Stehule
This patch adds a new, richer way to emit error messages and
exceptions from PL/Python code.
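A sketch of the richer call, assuming the keyword arguments added by
this patch (message text invented):

    DO $$
    plpy.error("invalid input",
               detail="the value could not be parsed",
               hint="check the input format")
    $$ LANGUAGE plpythonu;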
Author: Pavel Stehule
Reviewers: Catalin Iacob, Peter Eisentraut, Jim Nasby
PL/Python failed if a PL/Python function was invoked recursively via SPI,
since arguments are passed to the function in its global dictionary
(a horrible decision that's far too ancient to undo) and it would delete
those dictionary entries on function exit, leaving the outer recursion
level(s) without any arguments. Not deleting them would be little better,
since the outer levels would then see the innermost level's arguments.
Since PL/Python uses ValuePerCall mode for evaluating set-returning
functions, it's possible for multiple executions of the same SRF to be
interleaved within a query. PL/Python failed in such a case, because
it stored only one iterator per function, directly in the function's
PLyProcedure struct. Moreover, one interleaved instance of the SRF
would see argument values that should belong to another.
Hence, invent code for saving and restoring the argument entries. To fix
the recursion case, we only need to save at recursive entry and restore
at recursive exit, so the overhead in non-recursive cases is negligible.
To fix the SRF case, we have to save when suspending a SRF and restore
when resuming it, which is potentially not negligible; but fortunately
this is mostly a matter of manipulating Python object refcounts and
should not involve much physical data copying.
Also, store the Python iterator and saved argument values in a structure
associated with the SRF call site rather than the function itself. This
requires adding a memory context deletion callback to ensure that the SRF
state is cleaned up if the calling query exits before running the SRF to
completion. Without that we'd leak a refcount to the iterator object in
such a case, resulting in session-lifespan memory leakage. (In the
pre-existing code, there was no memory leak because there was only one
iterator pointer, but what would happen is that the previous iterator
would be resumed by the next query attempting to use the SRF. Hardly the
semantics we want.)
We can buy back some of whatever overhead we've added by getting rid of
PLy_function_delete_args(), which seems a useless activity: there is no
need to delete argument entries from the global dictionary on exit,
since the next time anyone would see the global dict is on the next
fresh call of the PL/Python function, at which time we'd overwrite those
entries with new arg values anyway.
Also clean up some really ugly coding in the SRF implementation, including
such gems as returning directly out of a PG_TRY block. (The only reason
that failed to crash hard was that all existing call sites immediately
exited their own PG_TRY blocks, popping the dangling longjmp pointer before
there was any chance of it being used.)
In principle this is a bug fix; but it seems a bit too invasive relative to
its value for a back-patch, and besides the fix depends on memory context
callbacks so it could not go back further than 9.5 anyway.
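A sketch of the recursion case that now works (function name invented):

    CREATE FUNCTION fact(n int) RETURNS int LANGUAGE plpythonu AS $$
    # recursion via SPI: the inner call no longer clobbers the outer
    # call's argument entries in the global dictionary
    if n <= 1:
        return 1
    return n * plpy.execute("SELECT fact(%d) AS f" % (n - 1))[0]["f"]
    $$;

    SELECT fact(5);   -- 120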
Alexey Grishchenko and Tom Lane
A function name that's double-quoted in SQL can contain almost any
characters, but we were using that name directly as part of the name
generated for the Python-level function, and Python doesn't like
anything that isn't pretty much a standard identifier. To fix,
replace anything that isn't an ASCII letter or digit with an underscore
in the generated name. This doesn't create any risk of duplicate Python
function names because we were already appending the function OID to
the generated name to ensure uniqueness. Per bug #13960 from Jim Nasby.
Patch by Jim Nasby, modified a bit by me. Back-patch to all
supported branches.
PLyString_ToComposite() blithely overwrote proc->result.out.d, even though
for a composite result type the other union variant proc->result.out.r is
the one that should be valid. This could result in a crash if out.r had
in fact been filled in (proc->result.is_rowtype == 1) and then somebody
later attempted to use that data; as per bug #13579 from Paweł Michalak.
Just to add insult to injury, it didn't work for RECORD results anyway,
because record_in() would refuse the case.
Fix by doing the I/O function lookup in a local PLyTypeInfo variable,
as we were doing already in PLyObject_ToComposite(). This is not a great
technique because any fn_extra data allocated by the input function will
be leaked permanently (thanks to using TopMemoryContext as fn_mcxt).
But that's a pre-existing issue that is much less serious than a crash,
so leave it to be fixed separately.
This bug would be a potential security issue, except that plpython is
only available to superusers and the crash requires coding the function
in a way that didn't work before today's patches.
Add regression test cases covering all the supported methods of converting
composite results.
Back-patch to 9.1 where the faulty coding was introduced.
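For illustration, the string method of building a composite result
(names invented):

    CREATE TYPE named_value AS (name text, value int);

    -- returning a composite as a string goes through PLyString_ToComposite
    CREATE FUNCTION make_nv() RETURNS named_value LANGUAGE plpythonu AS $$
    return "(life,42)"
    $$;

    SELECT make_nv();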
The error message wording for AttributeError has changed in Python 3.5.
For the plpython_error test, add a new expected file. In the
plpython_subtransaction test, we didn't really care what the exception
is, only that it is something coming from Python. So use a generic
exception instead, which has a message that doesn't vary across
versions.
PL/Python uses str() to convert Python values back to PostgreSQL, but
str() is lossy for float values, so use repr() instead in that case.
Author: Marko Kreen <markokr@gmail.com>
Allow PL/Python functions to return arrays of composite types.
Also, fix the restriction that plpy.prepare/plpy.execute couldn't
handle query parameters or result columns of composite types.
In passing, adopt a saner arrangement for where to release the
tupledesc reference counts acquired via lookup_rowtype_tupdesc.
The callers of PLyObject_ToCompositeDatum were doing the lookups,
but then the releases happened somewhere down inside subroutines
of PLyObject_ToCompositeDatum, which is bizarre and bug-prone.
Instead release in the same function that acquires the refcount.
Ed Behn and Ronan Dunklau, reviewed by Abhijit Menon-Sen
This test previously used a data value containing U+0080, and would
therefore fail if the database encoding didn't have an equivalent to
that; which only about half of our supported server encodings do.
We could fall back to using some plain-ASCII character, but that seems
like it's losing most of the point of the test. Instead switch to using
U+00A0 (no-break space), which translates into all our supported encodings
except the four in the EUC_xx family.
Per buildfarm testing. Back-patch to 9.1, which is as far back as this
test is expected to succeed everywhere. (9.0 has the test, but without
back-patching some 9.1 code changes we could not expect to get consistent
results across platforms anyway.)
The error test case in the plpython_do test resulted in a slightly
different error message with Python 3.4. So pick a different way to
test it that avoids that and is perhaps also a bit clearer.
We had coverage for functions returning setof a named composite type,
but not for anonymous records, which is a somewhat different code path.
In view of a recent crash report from Sergey Konoplev, this seems worth
testing, though I doubt there's any deterministic bug here today.
Domains over arrays are now converted to/from Python lists when passed
as arguments or return values, just like regular arrays.
This has some potential to break applications that rely on the old behavior
that they are passed as strings, but in practice there probably aren't many
such applications out there.
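A sketch of the new behavior (domain and function names invented):

    CREATE DOMAIN int_list AS int[];

    CREATE FUNCTION tail(xs int_list) RETURNS int_list
    LANGUAGE plpythonu AS $$
    return xs[1:]   # the domain value now arrives as a regular Python list
    $$;

    SELECT tail(ARRAY[1, 2, 3]);   -- {2,3}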
Rodolfo Campero
Similar to 2cfb1c6f77734db81b6e74bcae630f93b94f69be, the order in which
dictionary elements are printed is not reliable. This reappeared in the
tests of the string representation of result objects. Reduce the test
case to one result set column so that there is no question of order.
|