Commit message | Author | Age
- pg_char_to_encoding() in the multibyte-disabled case so that it does not
  throw an error, but instead returns a hard-coded default value (currently
  SQL_ASCII). This solves the "non-MB backend vs. MB-enabled frontend" problem.
- cleanup, i.e., as soon as we have caught the longjmp. This ensures that
  the current context will be a valid context throughout error cleanup.
  Before, it was possible for the current context to be pointing at a
  context that would get deleted during cleanup, leaving any subsequent
  pallocs in deep trouble. I was able to provoke an Assert failure when
  compiled with asserts + -DCLOBBER_FREED_MEMORY, if I did something that
  would cause an error to be reported by the backend large-object code,
  because indeed that code operates in a context that gets deleted partway
  through xact abort --- and CurrentMemoryContext was still pointing at it!
  Boo hiss.
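The hazard described above can be sketched in a few self-contained lines. This is an illustrative toy, not PostgreSQL's real memory-context code: the struct, `run_transaction`, and `elog_error` are stand-ins, but the shape of the fix is the same, switch `CurrentMemoryContext` to a context that survives cleanup immediately after catching the longjmp, before anything can allocate.

```cpp
// Toy context structs standing in for PostgreSQL's memory contexts.
#include <csetjmp>

struct MemoryContextData { const char *name; bool deleted; };
typedef MemoryContextData *MemoryContext;

static MemoryContextData top_ctx = { "TopMemoryContext", false };
static MemoryContextData lo_ctx  = { "LargeObjectContext", false };

MemoryContext CurrentMemoryContext = &top_ctx;
static std::jmp_buf err_return;

static void elog_error() { std::longjmp(err_return, 1); }

// Returns the context that was current when the error was caught,
// which shows why the switch must be the very first cleanup step.
MemoryContext run_transaction()
{
    if (setjmp(err_return) != 0) {
        MemoryContext doomed = CurrentMemoryContext;  // == &lo_ctx here
        CurrentMemoryContext = &top_ctx;  // the fix: switch immediately
        lo_ctx.deleted = true;            // xact abort may now delete it
        return doomed;
    }
    CurrentMemoryContext = &lo_ctx;  // large-object code runs here
    elog_error();                    // error raised in the doomed context
    return nullptr;
}
```

Without the immediate switch, any allocation between the catch and the context deletion would go into memory that is about to be freed.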
- loops, but just arbitrarily failing at 1000 locks.
- because StatFp never got set in that case. Set it immediately before
  use to eliminate such problems.
- field value being displayed; produced a core dump instead of the
  expected <NULL> display.
- thus causing failure if one sub-select had resjunk entries that the
  other did not (cf. bug report from Espinosa 4/27/00).
- cases where joinclauses were present but some joins have to be made
  by cartesian-product join anyway. An example is
      SELECT * FROM a, b, c WHERE (a.f1 + b.f2 + c.f3) = 0;
  Even though all the rels have joinclauses, we must join two of them
  in cartesian style before we can use the join clause...
- that might be hanging about. Now it does ... amazing nobody noticed
  this before ...
- not cause any compatibility problems because stored rules don't contain
  plan nodes --- in fact, we don't even have a readfunc for Unique nodes.
- than BIND_DEFERRED. That way, if the loaded library has unresolved
  references, shl_load fails cleanly. As we had it, shl_load would
  succeed and then the dynamic linker would call abort() when we tried to
  call into the loaded library. abort()ing a backend is uncool.
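On ELF platforms the analogous choice is dlopen(3)'s RTLD_NOW versus RTLD_LAZY: RTLD_NOW plays the role of HP-UX's BIND_IMMEDIATE, resolving every symbol at load time so a broken library fails in the load call rather than aborting later at the first call into it. A sketch of the fail-early pattern, not PostgreSQL's actual dynloader code, with `load_or_fail` as an illustrative name:

```cpp
// RTLD_NOW: all undefined symbols are resolved before dlopen returns,
// so a library with unresolved references fails here, cleanly.
#include <dlfcn.h>
#include <cstdio>

void *load_or_fail(const char *path)
{
    void *handle = dlopen(path, RTLD_NOW);
    if (handle == nullptr)
        std::fprintf(stderr, "load failed: %s\n", dlerror());
    return handle;
}
```

Passing a null path returns a handle for the main program itself, which is a convenient way to see the success case without an external library.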
- a new JDBC Makefile here by accident)
  Jan
- that will actually work on the column datatype.
- to a temp file.
- always failed if the Perl makefile's INSTALLSITELIB variable was
  specified in terms of another variable. Fix by adding an echo-installdir
  target to the Perl makefile, which the upper-level Makefile can invoke.
- specified index access method. Clean up the wording of some existing
  error messages, too.
- fsync settings, so the -F option no longer needs to be treated as secure.
- libpq++.h contained copies of the class declarations in the other
  libpq++ include files, which was bogus enough, but the declarations
  were not completely in step with the real declarations. Remove these in
  favor of including the headers with #include. Make the PgConnection
  destructor virtual (not absolutely necessary, but it seems like a real
  good idea considering the number of subclasses derived from it). Give
  all classes private copy constructors and assignment operators, to
  prevent the compiler from thinking it can copy these objects safely.
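The two class-level fixes look like this in a stripped-down sketch. These are hypothetical classes, not the real libpq++ headers: a virtual base destructor so that deleting a subclass through a `PgConnection*` runs the subclass destructor, and private, unimplemented copy operations (the idiom of that pre-C++11 era) so accidental copies fail to compile.

```cpp
#include <string>

class PgConnection {
public:
    PgConnection() {}
    virtual ~PgConnection() {}   // virtual: safe to delete subclasses
                                 // through a base-class pointer
private:
    PgConnection(const PgConnection &);             // not implemented:
    PgConnection &operator=(const PgConnection &);  // copying forbidden
};

class PgDatabase : public PgConnection {
public:
    explicit PgDatabase(const std::string &db) : dbname(db) {}
    ~PgDatabase() { ++destroyed; }
    static int destroyed;   // observable proof the right dtor ran
private:
    std::string dbname;
};

int PgDatabase::destroyed = 0;
```

Without the virtual destructor, `delete` through a `PgConnection*` would skip `~PgDatabase()` entirely, leaking whatever the subclass owns.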
- not a bare database name.
- compiler than the one selected to build Postgres with. It was trying to
  feed Postgres-compiler switches to Tcl's compiler. (Seen this before
  with the perl5 interface...) Fix to use only CFLAGS taken from Tcl's
  configure information, plus -I switches, which are pretty universal.
- unless you feed it the -Aa or -Ae switch. Autoconf does not know about
  this, but we can fix it in the hpux_cc template file. I knew templates
  were good for something ;-)
- gcc doesn't think these are a problem, but somewhere out there is a
  compiler that will spit up.
- 'Twas my fault, I think.
- know if that case ever breaks again...
- to give wrong results: it should be looking at inJoinSet, not inFromCl.
  Also, make the 'modified' flag local to ApplyRetrieveRule: we should
  append a rule's quals to the query iff that particular rule applies,
  not if we have fired any previously-considered rule for the query!
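The flag-scoping bug described above can be shown in a minimal sketch. The names and structs are hypothetical, not the actual rewriter code: with one `modified` flag shared across rules, firing rule 1 causes rule 2's quals to be appended even though rule 2 never applied; making the flag local to the per-rule function fixes it.

```cpp
struct Rule { bool applies; bool qual_appended; };

// Buggy shape: a caller-owned flag stays set from any earlier rule.
void apply_rule_shared(Rule *r, bool *modified)
{
    if (r->applies)
        *modified = true;
    if (*modified)                 // wrong test: any earlier rule counts
        r->qual_appended = true;
}

// Fixed shape: the flag is local, so only this rule's own firing counts.
void apply_rule_local(Rule *r)
{
    bool modified = false;
    if (r->applies)
        modified = true;
    if (modified)
        r->qual_appended = true;
}
```

Running both shapes over a rule that applies followed by one that does not makes the difference visible: the shared-flag version wrongly marks the second rule.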
- as 'alphaev5', cf. report from Stepanov 13-Apr-00.
- (SELECT FROM table*). The cause was a reference to the 'eref' field of
  an RTE, which is null in an RTE loaded from a stored rule parsetree.
  There wasn't any good reason to be touching the refname anyway...
- table for an average of NTUP_PER_BUCKET tuples per bucket, but
  cost_hashjoin was assuming a target load of one tuple per bucket. This
  was causing a noticeable underestimate of hashjoin costs.
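The arithmetic behind the mismatch is simple; the sketch below uses an illustrative constant and helper names, not the real cost_hashjoin code. If the executor sizes the hash table at one bucket per NTUP_PER_BUCKET tuples, an even distribution puts about NTUP_PER_BUCKET inner tuples in each bucket, so each outer probe examines roughly that many tuples rather than the single tuple the old cost model assumed.

```cpp
// Illustrative constant; stands in for the executor's sizing parameter.
const double NTUP_PER_BUCKET = 10.0;

double buckets_for(double inner_rows)
{
    return inner_rows / NTUP_PER_BUCKET;   // executor's sizing rule
}

double expected_bucket_load(double inner_rows)
{
    // Inner tuples examined per outer probe, assuming even distribution.
    return inner_rows / buckets_for(inner_rows);
}
```

For a 100,000-row inner relation the table gets 10,000 buckets, so each probe scans about 10 tuples, an order of magnitude more per-probe work than a one-tuple-per-bucket model predicts.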