Commit message
|
on connection. This patch changes it to use PQconnectdb rather than
{fe_setauthsvc,PQsetdb}. This still isn't the complete solution, as there
is no provision for user/password in class PgEnv, but it does get rid of
the error message. Tested with gcc version egcs-2.91.60 19981201
(egcs-1.1.1 release) under NetBSD-1.3K/i386.
Cheers,
Patrick Welche
|
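For reference, PQconnectdb() takes a single conninfo string in place of the per-field fe_setauthsvc()/PQsetdb() setup. A minimal sketch of the libpq call being switched to, with an illustrative conninfo string (not taken from the patch):

    #include <stdio.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        /* Illustrative conninfo; "user=..." and "password=..." keywords
         * would go in this same string once PgEnv grows support for them. */
        PGconn *conn = PQconnectdb("dbname=template1");

        if (PQstatus(conn) == CONNECTION_BAD)
        {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            PQfinish(conn);
            return 1;
        }
        /* ... run queries with PQexec() ... */
        PQfinish(conn);
        return 0;
    }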
|
real effect now.
|
definition of numeric_in.
|
on queries involving UNION, EXCEPT, INTERSECT.
|
cause troubles. See
Message-Id: <199905090312.MAA00466@ext16.sra.co.jp>
for more details.
|
Fixed by Hiroshi.
|
fopen(), instead of going through fd.c ... naughty naughty.
|
code, instead of the not-very-bulletproof stuff they had before.
|
files to be closed automatically at transaction abort or commit, should
they still be open. Also close any still-open stdio files allocated with
AllocateFile at abort/commit. This should eliminate problems with leakage
of file descriptors after an error. Also, put in some primitive buffered-IO
support so that psort.c can use virtual files without severe performance
penalties.
|
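A hedged sketch of the backend usage pattern this enables; the file name and wrapper function are invented for illustration, but AllocateFile()/FreeFile() and elog() are the real fd.c and error-reporting interfaces:

    #include "postgres.h"
    #include "storage/fd.h"     /* AllocateFile, FreeFile */

    void
    scratch_file_example(void)
    {
        FILE *fp = AllocateFile("scratch.tmp", "w");    /* tracked by fd.c */

        if (fp == NULL)
            elog(ERROR, "could not open scratch file");

        fprintf(fp, "intermediate data\n");

        /* Explicit close on the normal path; if elog(ERROR) aborts the
         * transaction before this point, fd.c now closes the file for us. */
        FreeFile(fp);
    }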
|
"SYSTEM", and unpack the files in the uuencoded .tar.gz file at the end in
src/test/regress so that the int2, int4 and geometry tests pass on NetBSD/i386.
They just fail on different wording of error messages and eg printing "0"
rather than "-0". At a guess the same will be true for the other NetBSD ports,
but I can't test them.
Cheers,
Patrick
|
with "SYSTEM", Patrick Welche
|
Get rid of Extend lock mode.
|
that led to CASE expressions not working very well in joined queries.
|
meaning that this failed:

    select proname,typname,prosrc from pg_proc,pg_type
    where proname = 'float8' and pg_proc.proargtypes[0] = pg_type.oid;
|
about certain to fail anytime it decided the relation to be hashed was
too big to fit in memory --- the code for 'batching' a series of hashjoins
had multiple errors. I've fixed the easier problems. A remaining big
problem is that you can get 'hashtable out of memory' if the code's
guesstimate about how much overflow space it will need turns out wrong.
That will require much more extensive revisions to fix, so I'm committing
these fixes now before I start on that problem.
|
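For context, the batching idea being repaired: when the relation to be hashed is too big for memory, both inputs are partitioned by one hash function so that matching keys always land in the same batch, and each batch pair is then joined on its own. A standalone toy version of that partitioning (not the executor's code: real batches spill to temp files and are probed through an in-memory hash table rather than the nested loop used here for brevity):

    #include <stdio.h>

    #define NBATCH 4            /* illustrative batch count */
    #define MAXPER 16           /* illustrative per-batch capacity */

    /* Same function for both inputs, so matching keys share a batch. */
    static int
    batch_of(int key)
    {
        return (unsigned int) key % NBATCH;
    }

    int
    main(void)
    {
        int     outer[] = {1, 2, 3, 4, 5, 6, 7, 8};
        int     inner[] = {2, 4, 6, 8, 10, 12};
        int     ob[NBATCH][MAXPER], ocnt[NBATCH] = {0};
        int     ib[NBATCH][MAXPER], icnt[NBATCH] = {0};
        int     b, i, j;

        /* Phase 1: split each input into batches by hash value. */
        for (i = 0; i < 8; i++)
        {
            b = batch_of(outer[i]);
            ob[b][ocnt[b]++] = outer[i];
        }
        for (i = 0; i < 6; i++)
        {
            b = batch_of(inner[i]);
            ib[b][icnt[b]++] = inner[i];
        }

        /* Phase 2: join each batch pair independently, in memory. */
        for (b = 0; b < NBATCH; b++)
            for (i = 0; i < ocnt[b]; i++)
                for (j = 0; j < icnt[b]; j++)
                    if (ob[b][i] == ib[b][j])
                        printf("match: %d\n", ob[b][i]);
        return 0;
    }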
|
arrayfuncs.patch     fixes a small bug in my previous patches for arrays
array-regress.patch  adds _bpchar and _varchar to regression tests
--
Massimo Dal Zotto
|
Original code used float8out(), but the resulting exponential notation
was not handled (e.g. '3E9' was decoded as '3').
|
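A hedged standalone illustration of the failure mode (not the actual decoder): an integer-style parse stops at the first non-digit, so float8out()'s exponential form loses its exponent, while a float parse handles it:

    #include <stdio.h>
    #include <stdlib.h>

    int
    main(void)
    {
        const char *text = "3E9";   /* float8out() may emit exponential form */

        /* An integer parse stops at the 'E' and silently yields 3 ... */
        printf("atoi:   %d\n", atoi(text));

        /* ... while a float parse understands the exponent: 3e+09. */
        printf("strtod: %g\n", strtod(text, NULL));
        return 0;
    }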
|
nodes with HAVING qualifier of upper plan. Have not seen any failures,
just being a little bit paranoid...
|
gcc quite so unhappy.
|
been applied. The patches are in the .tar.gz attachment at the end:

varchar-array.patch  this patch adds support for arrays of bpchar() and
                     varchar(), which were always missing from postgres.
                     These datatypes can be used to replace the _char4,
                     _char8, etc., which were dropped some time ago.

block-size.patch     this patch fixes many errors in the parser and other
                     programs which happen with very large query statements
                     (> 8K) when using a page size larger than 8192.
                     This patch is needed if you want to submit queries
                     larger than 8K. Postgres supports tuples up to 32K,
                     but you can't insert them because you can't submit
                     queries larger than 8K. My patch fixes this problem.
                     The patch also replaces all occurrences of `8192'
                     and `1<<13' in the sources with the proper constants
                     defined in include files, so you should never again
                     find 8192 hardwired in C code; this just makes the
                     code clearer.
--
Massimo Dal Zotto
|
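On the last point, a hedged before/after illustration: BLCKSZ is the block-size constant PostgreSQL's configuration headers define, while the buffer itself is invented for the example:

    #include "postgres.h"   /* brings in the configured BLCKSZ */

    /* Before: page size hardwired, silently wrong at other block sizes. */
    static char page_buf_old[8192];

    /* After: expressed with the configured constant. */
    static char page_buf_new[BLCKSZ];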
|
from EXCEPT/HAVING patch. Cases involving nontrivial GROUP BY expressions
now work again. Also, the code is at least somewhat better documented...
|
the cost of reading the source data.
|
to save a little bit of backend startup time. This way, the first
backend started after a VACUUM will rebuild the init file with up-to-date
statistics for the critical system indexes.
|
an identifier :-(. Sloppy transmission of a patch, likely.
|
FATAL 1:btree: BTP_CHAIN flag was expected
|
should be faster.
|
This makes no difference to the optimizer, which has already decided what
it's gonna do, but it makes the output of EXPLAIN much more plausible.
|
sometimes estimating an index scan of a table to be cheaper than a
sequential scan of the same tuples...
|
from ever returning a path. This put a bit of a crimp in the system's
ability to generate intelligent merge-join plans...
|
before going into the queue behind a person with higher priority.
|
Jan
|
that, but it'd be a New Feature, wouldn't it ... in the meantime,
avoiding a backend crash seems worthwhile.
|
Things are better now.
|
due to lack of a check for recursing into a null subexpression.
|
this is not revealed by any of our regression tests...
|
Jan