| Commit message | Author | Age |
| |
Hiroshi. ReleaseRelationBuffers now removes rel's buffers from pool,
instead of merely marking them nondirty. The old code would leave valid
buffers for a deleted relation, which didn't cause any known problems
but can't possibly be a good idea. There were several places which called
ReleaseRelationBuffers *and* FlushRelationBuffers, which is now
unnecessary; but there were others that did not. FlushRelationBuffers
no longer emits a warning notice if it finds dirty buffers to flush,
because with the current bufmgr behavior that's not an unexpected
condition. Also, FlushRelationBuffers will flush out all dirty buffers
for the relation regardless of block number. This ensures that
pg_upgrade's expectations are met about tuple on-row status bits being
up-to-date on disk. Lastly, tweak BufTableDelete() to clear the
buffer's tag so that no one can mistake it for being a still-valid
buffer for the page it once held. Formerly, the buffer would not be
found by buffer hashtable searches after BufTableDelete(), but it would
still be thought to belong to its old relation by the routines that
sequentially scan the shared-buffer array. Again I know of no bugs
caused by that, but it still can't be a good idea.
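A minimal sketch of both ideas, using made-up structure and flag names rather than the real bufmgr ones: a dropped relation's buffers are invalidated outright, and each invalidated buffer's tag is cleared so a sequential scan of the buffer array cannot mistake it for a still-valid page.

typedef struct SketchBufferTag
{
    unsigned relId;             /* which relation */
    unsigned blockNum;          /* which block of that relation */
} SketchBufferTag;

typedef struct SketchBufferDesc
{
    SketchBufferTag tag;
    int             flags;      /* SKETCH_VALID | SKETCH_DIRTY */
} SketchBufferDesc;

#define SKETCH_DIRTY 0x01
#define SKETCH_VALID 0x02

static void
sketch_release_relation_buffers(SketchBufferDesc *bufs, int nbufs, unsigned relId)
{
    int i;

    for (i = 0; i < nbufs; i++)
    {
        if ((bufs[i].flags & SKETCH_VALID) && bufs[i].tag.relId == relId)
        {
            /* remove the buffer from the pool, not just mark it non-dirty */
            bufs[i].flags &= ~(SKETCH_VALID | SKETCH_DIRTY);
            /* and clear the tag, as BufTableDelete() now does */
            bufs[i].tag.relId = 0;
            bufs[i].tag.blockNum = 0;
        }
    }
}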
|
| |
RowExclusive (my fault). Also, install a check to prevent people
from trying COPY BINARY to stdout/from stdin. No way that will
work unless we redesign the frontend COPY protocol ... which is
not worth the trouble in the near future ...
|
| |
IRIX systems using the native compilers. A summary is:
- Various files use "//" as a comment delimiter in C files.
- Problems caused by assuming "char" is signed.
cash.in: building -signed, the rules regression test fails as described
in FAQ_QNX4. If CHAR_MAX is "255U" then ((signed char)CHAR_MAX) is -1 (illustrated below).
postmaster.c: random number regression test failed without this change.
- Some generic build issues and warning message cleanup.
David Kaelbling
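The CHAR_MAX remark above can be shown with a small self-contained program; the result of the cast is implementation-defined, and the values in the comments are the usual ones on a platform where plain char is unsigned.

#include <limits.h>
#include <stdio.h>

int
main(void)
{
    printf("CHAR_MAX              = %d\n", (int) CHAR_MAX);
    printf("(signed char)CHAR_MAX = %d\n", (int) (signed char) CHAR_MAX);
    /* prints 255 and -1 where plain char is unsigned; 127 and 127 otherwise */
    return 0;
}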
|
| |
just use the portable form,
tr ABCDEFGHIJKLMNOPQRSTUVWXYZ abcdefghijklmnopqrstuvwxyz
There were a bunch of places that weren't paying attention to configure's
result anyway (including configure itself!?); clean them up too.
|
| |
days. It seems to be a FAQ, and I think I know why. When creating a 'C'
language function, CREATE FUNCTION is fed the shared object filename,
and seems to succeed. Only when trying to use the function is an error
thrown, by which time the coder thinks something's wrong with executing
the code, not with loading it.
I think I once saw it proposed to load shared objects at function creation
time, but that idea was shot down on the grounds of resident memory bloat,
ISTR. Here's a patch for a compromise: all it does is stat() the file,
just like the loader code does, so that the errors caused by nonexistent
files and missing directory 'x' permissions (the most common ones, it seems),
get caught while the developer is still thinking about code loading. It
doesn't catch all errors (like the code not being readable by the postgres
user) but seems to catch the most common, without actually opening the file.
What do you think?
Ross
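A minimal sketch of the pre-check Ross describes, using a hypothetical function name rather than the actual patch:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

/* return 0 if the shared object looks loadable, -1 (with a message) if not */
static int
sketch_check_loadable_file(const char *path)
{
    struct stat st;

    if (stat(path, &st) < 0)
    {
        fprintf(stderr, "ERROR: stat failed on file '%s': %s\n",
                path, strerror(errno));
        return -1;
    }
    /* deliberately stops short of opening the file, just as the patch does;
     * errors such as the file not being readable by the postgres user are
     * still only caught at load time */
    return 0;
}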
|
| |
indexes, apparently, nor on functional indexes with more than one input
column (force of natts = 1 was in the wrong branch of IF statement).
Coredumped if source relation contained any uncommitted tuples, due to
failure to test for success return from heap_fetch. Fetched tuple
was passed directly to heap_insert, which clobbers the TID and commit
status in the tuple header it's given, which meant that the source
relation's tuples all got trashed as the copy proceeded. Abort partway
through, and you're left with a lot of missing tuples.
I wonder what else is lurking here ...
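A sketch of the two fixes, with deliberately simplified, hypothetical signatures (the real heap_fetch, heap_copytuple and heap_insert take more arguments): test the fetch result, and hand heap_insert a copy so the source tuple's header is never clobbered.

typedef struct SketchHeapTuple SketchHeapTuple;

extern int              sketch_heap_fetch(SketchHeapTuple *tup);      /* 0 if not visible */
extern SketchHeapTuple *sketch_heap_copytuple(SketchHeapTuple *tup);  /* palloc'd copy */
extern void             sketch_heap_insert(SketchHeapTuple *tup);     /* rewrites TID/status */

static void
sketch_copy_one_tuple(SketchHeapTuple *src)
{
    if (!sketch_heap_fetch(src))
        return;                 /* uncommitted/dead tuple: skip it, don't crash */

    /* insert a copy, so heap_insert's header updates never touch the source */
    sketch_heap_insert(sketch_heap_copytuple(src));
}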
|
| |
Make it behave correctly when there are more than two tables being
joined, also. Update regression test expected outputs.
|
| |
It's still pretty fundamentally bogus :-(.
Freebie side benefit: ALTER TABLE RENAME works on indexes now.
|
| |
failure of rename() call.
|
| |
pg_char_to_encoding() in the multibyte-disabled case so that it does not
throw an error, but rather returns a hard-coded default value (currently SQL_ASCII).
This would solve the "non-mb backend vs. mb-enabled frontend" problem.
|
| |
cleanup, ie, as soon as we have caught the longjmp. This ensures that
current context will be a valid context throughout error cleanup. Before
it was possible that current context was pointing at a context that would
get deleted during cleanup, leaving any subsequent pallocs in deep
trouble. I was able to provoke an Assert failure when compiled with
asserts + -DCLOBBER_FREED_MEMORY, if I did something that would cause
an error to be reported by the backend large-object code, because indeed
that code operates in a context that gets deleted partway through xact
abort --- and CurrentMemoryContext was still pointing at it! Boo hiss.
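The ordering can be sketched as follows, with made-up names standing in for the real elog/longjmp and memory-context machinery:

#include <setjmp.h>

typedef struct SketchMemoryContext SketchMemoryContext;

extern SketchMemoryContext *SketchTopContext;      /* lives as long as the backend */
extern SketchMemoryContext *SketchCurrentContext;  /* may point at doomed memory */
extern void sketch_abort_transaction(void);        /* may delete the old current context */

static jmp_buf sketch_error_return;

static void
sketch_recover_from_elog(void)
{
    if (setjmp(sketch_error_return) != 0)
    {
        /* first, make the current context valid throughout cleanup ... */
        SketchCurrentContext = SketchTopContext;
        /* ... only then run abort processing, which can delete the context
         * that was current when the error was raised */
        sketch_abort_transaction();
    }
}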
|
| |
loops, but just arbitrarily failing at 1000 locks.
|
| |
because StatFp never got set in that case. Set it immediately before
use to eliminate such problems.
|
| |
thus causing failure if one sub-select had resjunk entries that the other
did not (cf. bug report from Espinosa 4/27/00).
|
| |
cases where joinclauses were present but some joins have to be made
by cartesian-product join anyway. An example is
SELECT * FROM a,b,c WHERE (a.f1 + b.f2 + c.f3) = 0;
Even though all the rels have joinclauses, we must join two of them
in cartesian style before we can use the join clause...
|
| |
that might be hanging about. Now it does ... amazing nobody noticed
this before ...
|
| |
not cause any compatibility problems because stored rules don't contain
plan nodes --- in fact, we don't even have a readfunc for Unique nodes.
|
| |
than BIND_DEFERRED. That way, if the loaded library has unresolved
references, shl_load fails cleanly. As we had it, shl_load would
succeed and then the dynlinker would call abort() when we try to call
into the loaded library. abort()ing a backend is uncool.
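Sketched against the HP-UX shl_load() interface (error handling and the surrounding dynloader wrapper omitted):

#include <dl.h>            /* HP-UX dynamic loader interface */

static shl_t
sketch_load_library(const char *path)
{
    /* BIND_DEFERRED let libraries with unresolved references "load", only to
     * abort() the backend at the first call into them; BIND_IMMEDIATE makes
     * the load itself fail cleanly instead. */
    return shl_load(path, BIND_IMMEDIATE, 0L);
}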
|
| |
that will actually work on the column datatype.
|
| |
specified index access method. Clean up wording of some existing error
messages, too.
|
| |
fsync settings, so the -F option no longer needs to be treated as secure.
|
| |
'Twas my fault, I think.
|
| |
to give wrong results: it should be looking at inJoinSet not inFromCl.
Also, make 'modified' flag be local to ApplyRetrieveRule: we should
append a rule's quals to the query iff that particular rule applies,
not if we have fired any previously-considered rule for the query!
|
| |
(SELECT FROM table*). Cause was reference to 'eref' field of an RTE,
which is null in an RTE loaded from a stored rule parsetree. There
wasn't any good reason to be touching the refname anyway...
|
| |
table for an average of NTUP_PER_BUCKET tuples/bucket, but cost_hashjoin
was assuming a target load of one tuple/bucket. This was causing a
noticeable underestimate of hashjoin costs.
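The size of the discrepancy can be sketched with a toy cost expression; the value 10 for NTUP_PER_BUCKET is assumed here purely for illustration.

#define SKETCH_NTUP_PER_BUCKET 10   /* assumed value for illustration only */

/* cost charged for probing the hash table under the corrected assumption:
 * each outer tuple is compared against about NTUP_PER_BUCKET inner tuples,
 * not against just one as the old estimate assumed */
static double
sketch_hash_probe_cost(double outer_rows, double cost_per_comparison)
{
    return outer_rows * SKETCH_NTUP_PER_BUCKET * cost_per_comparison;
}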
|
| |
(LIKE and regexp matches). These are not yet referenced in pg_operator,
so by default the system will continue to use eqsel/neqsel.
Also, tweak convert_to_scalar() logic so that common prefixes of strings
are stripped off, allowing better accuracy when all strings in a table
share a common prefix.
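A sketch of the prefix-stripping idea only, not the real convert_to_scalar() code: when the boundary strings share a common prefix, the interesting variation is in the suffixes, so drop the prefix before mapping the strings onto a numeric scale.

#include <string.h>

/* length of the common prefix of two strings */
static size_t
sketch_common_prefix_len(const char *a, const char *b)
{
    size_t i = 0;

    while (a[i] != '\0' && a[i] == b[i])
        i++;
    return i;
}

/* e.g. with bounds 'fooaaa' and 'foozzz', the value 'foobar' is placed by
 * comparing "bar" against "aaa" .. "zzz" rather than three nearly equal
 * full strings, which gives a much better interpolation */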
|
| |
prevent duplicate OIDs from being added. Clean up redundant error
messages.
|
| |
since it has no way to indicate to its caller that the constant is
actually NULL. This prevents coredump in cases like
WHERE textfield < null::text;
|
| |
subsequent elogs() in the same COPY operation to display the wrong
line number. Fix is to clear lineno only when elog level is such
that we will not return to caller.
|
| |
Fix spelling of "millennium".
Thanks to Mika Nystrom <mika@camembert.cs.caltech.edu> for spotting this.
|
| |
all platforms, not just SCO. The operation is undefined for Unix-domain
sockets anyway. It seems SCO is not the only platform that complains
instead of treating the call as a no-op.
|
| |
contained a sub-SELECT nested within an AND/OR tree that cnfify()
thought it should rearrange. Same physical sub-SELECT node could
end up linked into multiple places in resulting expression tree.
This is harmless for most node types, but not for SubLink.
Repair bug by making physical copies of subexpressions that get
logically duplicated by cnfify(). Also, tweak the heuristic that
decides whether it's a good idea to do cnfify() --- we don't really
want that to happen when it would cause multiple copies of a subselect
to be generated, I think.
|
| |
Jan
|
| |
whether to do fsync or not, and if so (which should be seldom) just
do the fsync immediately. This way we need not build data structures
in md.c/fd.c for blind writes.
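A sketch of that blind-write path, with made-up helper and flag names: decide about durability on the spot and fsync right away in the rare case it is needed, so nothing has to be remembered afterwards.

#define _XOPEN_SOURCE 500   /* for pwrite on some systems */
#include <unistd.h>

static int sketch_fsync_enabled = 1;    /* stand-in for the real fsync setting */

static void
sketch_blind_write(int fd, const void *page, long blocksize, long offset)
{
    /* write the page directly; there is no relcache entry to hang state on */
    (void) pwrite(fd, page, (size_t) blocksize, (off_t) offset);

    /* rare case: pay the fsync cost immediately rather than tracking the
     * file for a later sync */
    if (sketch_fsync_enabled)
        fsync(fd);
}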
|
| |
logged queries to 1024, truncating longer queries. That is about half of
the size I need (I have a union that is 2K long). Can someone consider
bumping it to 4K or so? Patch attached...
Regards,
Ed Loehr
|
| |
as a shared dirtybit for each shared buffer. The shared dirtybit still
controls writing the buffer, but the local bit controls whether we need
to fsync the buffer's file. This arrangement fixes a bug that allowed
some required fsyncs to be missed, and should improve performance as well.
For more info see my post of same date on pghackers.
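A sketch of the arrangement with hypothetical names in place of the real buffer headers: the shared bit means "this page must be written", while the per-backend bit means "I dirtied it, so I must see that its file gets fsync'd".

#define SKETCH_NBUFFERS 1024

typedef struct SketchSharedBufferHdr
{
    int is_dirty;               /* shared: controls writing the buffer */
} SketchSharedBufferHdr;

/* one per buffer; in the real system this array sits in shared memory */
static SketchSharedBufferHdr SketchSharedHdrs[SKETCH_NBUFFERS];

/* private to each backend: controls whether *we* must fsync the buffer's file */
static int SketchDirtiedByMe[SKETCH_NBUFFERS];

static void
sketch_mark_buffer_dirty(int buf)
{
    SketchSharedHdrs[buf].is_dirty = 1;
    SketchDirtiedByMe[buf] = 1;
}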
|