| Commit message | Author | Age |
|
postgresql-7.3.3/src/interfaces/python/pg.py.
_quote() function fails due to integer overflow if input d is larger
than max integer.
In the case where the column type is "BIGINT", the input d may very well
be larger than max integer while its type, t, is labeled 'int'.
The conversion on line 19, return "%d" % int(d), will fail due to
"OverflowError: long int too large to convert to int".
Please describe a way to repeat the problem. Please try to provide a
concise reproducible example, if at all possible:
----------------------------------------------------------------------
[1] create a table with a column type 'BIGINT'.
[2] use pg.DB.insert() to insert a value that is larger than max integer
If you know how this problem might be fixed, list the solution below:
---------------------------------------------------------------------
Just changing the conversion at line 19 of pg.py to long(d) instead of
int(d) should fix it. The following is a patch:
Chih-Hao Huang
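The patch itself was attached to the original report and is not reproduced
here. A minimal sketch of the described one-line change (Python 2.x; the
surrounding branches of _quote() are omitted and only indicative):
--------------- sketch of the suggested fix in pg.py ------------------
def _quote(d, t):
    if t == 'int':
        # was: return "%d" % int(d)
        # long() accepts BIGINT values beyond the platform's maximum int,
        # avoiding "OverflowError: long int too large to convert to int".
        return "%d" % long(d)
    return "'%s'" % str(d)  # placeholder for the remaining type branches
----------------------------------------------------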
|
vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
--- setup.py~ Tue Mar 19 08:21:14 2002
+++ setup.py Wed May 14 15:10:30 2003
@@ -30,8 +30,8 @@
optional_libs=[ 'libpqdll', 'wsock32', 'advapi32' ]
data_files = [ 'libpq.dll' ]
else:
- include_dirs=['/usr/include/pgsql']
- library_dirs=['usr/lib/pgsql']
+ include_dirs=['../../include','../libpq','/usr/include/pgsql']
+ library_dirs=['../libpq','/usr/lib/pgsql']
optional_libs=['pq']
data_files = []
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
George Young
|
of an index can now be a computed expression instead of a simple variable.
Restrictions on expressions are the same as for predicates (only immutable
functions, no sub-selects). This fixes problems recently introduced with
inlining SQL functions, because the inlining transformation is applied to
both expression trees so the planner can still match them up. Along the
way, improve efficiency of handling index predicates (both predicates and
index expressions are now cached by the relcache) and fix 7.3 oversight
that didn't record dependencies of predicate expressions.
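For illustration only (not part of the commit message): with this change an
index key can be an arbitrary immutable expression rather than a plain
column. A sketch through the Python interface, with made-up names:
--------------- illustrative example ------------------
import pg  # PyGreSQL classic interface; any client would do

db = pg.DB(dbname='test')
db.query("CREATE TABLE people (first_name text, last_name text)")
# Index on a computed expression; only immutable functions are allowed
# and sub-selects are not permitted (same restrictions as predicates).
db.query("CREATE INDEX people_full_name_idx"
         " ON people ((lower(first_name || ' ' || last_name)))")
----------------------------------------------------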
|
it, and map that to close() on Unix.
|
Keep PQfreeNotify() around for binary compatibility.
|
configure under native Windows (MinGW that is), but you won't get very far
compiling yet. The dynaloader files are from Jan Wieck's patch set.
|
PostgreSQL source code.
Neil Conway
|
query string. This fixes a bug where bool types were sometimes returned as
a string that could not be dropped into a query.
|
here, not -1.
|
>
> In pg.py the attributes of DB are defined as being the same as
> the attributes of the corresponding pgobject "db", using the following
...
> The problem is that the attributes of db (which are read only)
> are not static (they are actually function calls to PostgreSQL),
> especially "status" and "error", but those attributes are copied
> and this is done only once when initializing the DB object.
>
> So, in effect, only the attribute "db.error" of a DB instance
> will be updated, but not the attribute "error". Same with "status".
> Don't copy the (read only) attributes of the pgobject to the
> DB object, but only the methods, and all of them, like this:
>
> --------------- change in pg.py ------------------
> # Create convenience methods, in a way that is still overridable.
> for e in self.db.__methods__:
>     setattr(self, e, getattr(self.db, e))
> ----------------------------------------------------
>
> Furthermore, make an addition to the documentation of the
> DB wrapper class (i.e. in pygresql-pg-db.html):
> After the sentence "All pgobject methods are included in this class also."
> add the following sentence "The pgobject read-only attributes can be
> accessed by adding the prefix 'db.' to them."
Christoph Zwerschke
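An illustrative sketch of the behaviour described above (connection
parameters are placeholders): after the suggested change the live,
read-only attributes are reached through the wrapped pgobject via the
'db.' prefix instead of through stale copies on the DB instance:
--------------- illustrative example ------------------
import pg

db = pg.DB(dbname='test')
db.query("SELECT 1")
current_status = db.db.status  # pgobject attribute, evaluated on access
current_error = db.db.error    # always reflects the most recent operation
----------------------------------------------------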
|
logic carefully and I am sure that the test against n happens after it
is assigned to.
|
=====================
I suggested an improvement of the inserttable in the PyGreSQL interface
already in January, but seemingly it was never implemented. I was told this
is the right place to get patches in for PyGreSQL, so I'm reposting my patch
here.
I consider the inserttable method essential in populating the database
because of its benefits in performance compared to insert, so I think this
patch is quite essential. The attachment is an improved version of the
corresponding pg_inserttable function in pgmodule.c, which fixes the
following problems:
* The function raised exceptions because PyList_GetItem was used beyond the
size of the list. This was checked by comparing the result with NULL, but
the exception was not cleaned up, which could result in mysterious errors in
the following Python code. Instead of clearing the exception using
PyErr_Clear or something like that, I avoided throwing the exception at all
by first requesting the size of the list. Using this opportunity, I also
checked the uniformity of the size of the rows passed in the lists/tuples.
The function also accepts (and silently ignores) empty lists and sublists.
* Python "None" values are now accepted and properly converted to PostgreSQL
NULL values
* The function now generates an error message in case of a line buffer
overflow
* It now copes with tabs, newlines and backslashes in strings
* Rewrote the buffer filling code which should now run faster by avoiding
unnecessary string copy operations back and forth
Christoph Zwerschke
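A hedged usage sketch of inserttable() covering the cases the patch
addresses (table and column names are made up):
--------------- illustrative example ------------------
import pg  # PyGreSQL classic interface

db = pg.DB(dbname='test')
db.query("CREATE TABLE people (id int, name text)")
# Rows may be lists or tuples; the patched function checks that all rows
# have the same length, converts None to NULL, and escapes tabs, newlines
# and backslashes instead of producing corrupt COPY data.
db.inserttable('people', [
    (1, 'Alice'),
    (2, None),                  # becomes SQL NULL
    (3, 'tab\tand\nnewline'),   # special characters are escaped
])
----------------------------------------------------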
|
that field so that existing programs don't break.
|
raises pgdb.DatabaseError when any of the fetch*
methods is invoked but the previous call to execute* did
not produce a result set, or no such call was issued yet.
Also, raises pgdb.NotSupportedError when .nextset() is
invoked, instead of NameError.
This behaviour complies with DB-API 2.0.
Thanks for your work!
Timur Irmatov.
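A short sketch of the DB-API 2.0 behaviour described above (connection
parameters are placeholders):
--------------- illustrative example ------------------
import pgdb

con = pgdb.connect(database='test')
cur = con.cursor()
try:
    cur.fetchone()                # no execute*() has been issued yet
except pgdb.DatabaseError:
    pass                          # expected under DB-API 2.0
try:
    cur.nextset()                 # multiple result sets are not supported
except pgdb.NotSupportedError:
    pass                          # previously this raised NameError
----------------------------------------------------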
|
used for the primary key lookup. This will prevent a database lookup
for each connection object that gets created. This could be a significant
optimization on a busy system.
Similarly, the get_attnames method allows for the attributes dictionary
to be installed directly.
|
debug output is managed. The user can continue to use the current method
of passing a formatting string to have a replacement done, and output will
be sent to standard output exactly as before. In addition, they can set it
to a file object, sys.stderr for example, and the query string will be
printed to it. They can also set it to a method (function), and the query
string will be passed to that method, giving them the maximum flexibility
to do whatever they want with the query string.
I will be working with the PyGreSQL documentation shortly and at that time
will properly document this feature.
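A sketch of the three debug settings described above (the attribute name
'debug' on the DB wrapper is assumed from context):
--------------- illustrative example ------------------
import sys
import pg

db = pg.DB(dbname='test')
db.debug = "QUERY: %s"   # format string: substituted and printed to stdout
db.query("SELECT 1")
db.debug = sys.stderr    # file object: the query string is written to it
db.query("SELECT 2")
db.debug = lambda q: sys.stderr.write("SQL> " + q + "\n")  # callable
db.query("SELECT 3")
----------------------------------------------------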
|
-hackers a couple days ago.
Notes/caveats:
- added regression tests for the new functionality, all
regression tests pass on my machine
- added pg_dump support
- updated PL/PgSQL to support per-statement triggers; didn't
look at the other procedural languages.
- there's (even) more code duplication in trigger.c than there
was previously. Any suggestions on how to refactor the
ExecXXXTriggers() functions to reuse more code would be
welcome -- I took a brief look at it, but couldn't see an
easy way to do it (there are several subtly-different
versions of the code in question)
- updated the documentation. I also took the liberty of
removing a big chunk of duplicated syntax documentation in
the Programmer's Guide on triggers, and moving that
information to the CREATE TRIGGER reference page.
- I also included some spelling fixes and similar small
cleanups I noticed while making the changes. If you'd like
me to split those into a separate patch, let me know.
Neil Conway
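Purely illustrative (not from the patch): a statement-level trigger of the
kind now supported, issued through the Python interface; the table and
trigger function names are made up, and log_accounts_update() would be a
trigger function defined elsewhere:
--------------- illustrative example ------------------
import pg

db = pg.DB(dbname='test')
db.query("CREATE TRIGGER accounts_stmt_trig"
         " AFTER UPDATE ON accounts"
         " FOR EACH STATEMENT EXECUTE PROCEDURE log_accounts_update()")
----------------------------------------------------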
|
editing.
|
recently. I just ran into it while running a set of python test scripts,
and I'm not sure who the normal maintainer is for interfaces/python.
John Nield
|
since we moved from an implicit to an explicit implementation.
Greg Copeland
|
syscat.py scripts were both modified. pg.py uses it to cache a list of
pks (which it seemingly does for every db connection) and various
attributes. syscat uses it to walk the list of system tables and
query the various attributes from these tables.
In both cases, it seemingly makes sense to apply what you've requested.
Greg Copeland
|
saw a fix offered up. Since I'm gearing up to use Postgres and Python
soon, I figured I'd try my hand at getting this sucker addressed.
Apologies if this has already been plugged. I looked in the archives
and never saw a response.
At any rate, I must admit I don't think I fully understand the
implications of some of the changes I made even though they appear to be
straightforward. We all know the devil is in the details. Anyone more
knowledgeable is requested to review my changes. :(
I also updated the advanced.py script in a somewhat nonsensical fashion
to make use of an int8 field in an effort to test this change. It seems
to run okay; however, this is by no means an exhaustive test. So,
it's possible that a bumpy road may lie ahead for some. On the other
hand...overflows (hopefully) previously lurked (long -> int conversion).
Greg Copeland
|
with the Cursor object's fetchmany() method. The API and
inline documentation state that the default is 1. It
currently defaults to 5.
Patrick Macdonald
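A minimal sketch of the documented behaviour (connection parameters and
table name are placeholders):
--------------- illustrative example ------------------
import pgdb

con = pgdb.connect(database='test')
cur = con.cursor()
cur.execute("SELECT * FROM mytable")
rows = cur.fetchmany()    # no size given: DB-API says use cur.arraysize,
assert len(rows) <= 1     # which defaults to 1, not 5
----------------------------------------------------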
|
Slackware 8), and perhaps on other Pythons; I haven't checked. Something in
the _pg.connect() call isn't working. I think the problem stems from the
fact that 'host' is a named parameter of both _pg.connect and pgdb.connect,
and so Python treats it as a variable assignment, not a named parameter.
The patch uses non-named parameters instead.
Andrew Johnson
|
entries, per pghackers discussion. This fixes aggregates to live in
namespaces, and also simplifies/speeds up lookup in parse_func.c.
Also, add a 'proimplicit' flag to pg_proc that controls whether a type
coercion function may be invoked implicitly, or only explicitly. The
current settings of these flags are more permissive than I would like,
but we will need to debate and refine the behavior; for now, I avoided
breaking regression tests as much as I could.
|
- Use PyObject_Del() rather than macro version
- Check the Python version and fall back to PyMem_Del() for older systems.
|
compiled with --with-pymalloc. This change fixes that. Thanks to
Dave Wallace <dwallace@udel.edu>
|
>
> I am running Python 1.5.
Therein lies the problem... :)
Since it appears you have the requirement of supporting old python
versions, attached is just the pgdb.py part of the patch (with a fix for
DateTime handling). It has the same functionality but certainly won't be
quite as fast. Given the absence of _PyString_Join in python1.5, it's a
pain to get the C variants working for all versions. The pgdb.py patch
does leave the hooks in, should someone wish to do the optimization at a
later point.
Elliot Lee
|
Elliot Lee wrote:
> This patch to the python bindings adds C versions of the often-used
> query args quoting routines, as well as support for quoting lists e.g.
> dbc.execute("SELECT * FROM foo WHERE blah IN %s", ([1,2,3],))
|
query args quoting routines, as well as support for quoting lists e.g.
dbc.execute("SELECT * FROM foo WHERE blah IN %s", ([1,2,3],))
Elliot Lee
|
the latest version and I wanted to make sure that there was a clean release.
I also changed the build files as I discussed in my letter of Nov 6, 2001. At
the time I was asked to hold off until after the release.
|
the interactive docs.
|
initdb/regression tests pass.
|
of the documentation in preparation for the upcoming release.
|
make it clearer that d was the argument to the format operator.