pfree'able result, since some callers expect to be able to pfree
the result of a pass-by-reference function. Per report from Chris Trawick.
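
A minimal sketch of the requirement in plain C (illustrative names; the backend itself would use palloc/pfree): return a freshly allocated copy rather than a pointer into static storage, so the caller can safely free the result.

    #include <stdlib.h>
    #include <string.h>

    /* Return a freshly allocated copy, never a pointer into static storage,
     * so callers may free() the result.  The backend equivalent would
     * palloc() the copy so callers can pfree() it. */
    static char *
    get_label(void)
    {
        static const char fixed[] = "default";
        char       *result = malloc(sizeof(fixed));

        if (result != NULL)
            memcpy(result, fixed, sizeof(fixed));
        return result;
    }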

< failure.
> failure. This could be triggered by a user command or a timer.
< * Force archiving of partially-full WAL files when pg_stop_backup() is
< called or the server is stopped
> * Automatically force archiving of partially-filled WAL files when
> pg_stop_backup() is called or the server is stopped

before you are done.

>
> Doing this will allow administrators to know more easily when the
> archive contains all the files needed for point-in-time recovery.

> * Force archiving of partially-full WAL files when pg_stop_backup() is
> called or the server is stopped

Not connected to anything useful yet ...

return just a single tuple at a time. Currently the only such node
type is Hash, but I expect we will soon have indexscans that can return
tuple bitmaps. A side benefit is that EXPLAIN ANALYZE now shows the
correct tuple count for a Hash node.

critical and noncritical contexts (an example of noncritical being
post-checkpoint removal of dead xlog segments). In the critical cases
the CRIT_SECTION mechanism will cause ERROR to be promoted to PANIC
anyway, and in the noncritical cases we shouldn't let an error take
down the entire database. Arguably there should be *no* explicit
PANIC errors in this module, only more START/END_CRIT_SECTION calls,
but I didn't go that far. (Yet.)
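
An illustrative pattern, assuming a made-up helper rather than the real xlog.c code: report a plain ERROR and let the surrounding critical section, if any, escalate it to PANIC.

    #include "postgres.h"
    #include "miscadmin.h"
    #include <unistd.h>

    /* Remove a file, reporting ERROR on failure.  When the caller says the
     * operation is critical, wrap it in a critical section so the same
     * ERROR is automatically promoted to PANIC; otherwise it only aborts
     * the current transaction. */
    static void
    remove_segment_file(const char *path, bool critical)
    {
        if (critical)
            START_CRIT_SECTION();

        if (unlink(path) != 0)
            ereport(ERROR,
                    (errmsg("could not remove file \"%s\": %m", path)));

        if (critical)
            END_CRIT_SECTION();
    }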

when recycling a large number of xlog segments during checkpoint.
The former behavior searched from the same start point each time,
requiring O(checkpoint_segments^2) stat() calls to relocate all the
segments. Instead keep track of where we stopped last time through.
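
A standalone sketch of the search-pointer idea (the path format and helper names are made up): resume each probe where the previous one stopped, so recycling N segments needs O(N) stat() calls instead of O(N^2).

    #include <stdio.h>
    #include <sys/stat.h>

    /* Hypothetical check: does a segment file with this number already exist? */
    static int
    segment_exists(unsigned int segno)
    {
        char        path[64];
        struct stat st;

        snprintf(path, sizeof(path), "pg_xlog/%08X", segno);
        return stat(path, &st) == 0;
    }

    /* Find the next unused segment number at or after *search_from, advancing
     * the shared cursor so later calls skip everything already examined. */
    static unsigned int
    next_free_segment(unsigned int *search_from)
    {
        while (segment_exists(*search_from))
            (*search_from)++;
        return (*search_from)++;
    }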

which induced bug #1597 in addition to having several other misbehaviors
(like labeling the dump with a completion time having nothing to do with
reality). Instead just print out the desired strings where RestoreArchive
was already emitting the 'PostgreSQL database dump' and
'PostgreSQL database dump complete' strings.

required by modern versions of GCC.
Niels Breet

> * -Use indexes for MIN() and MAX()

assuming comparison of atttypid is sufficient. In a dropped column
atttypid will be 0, and we'd better check the physical-storage data
to make sure the tupdescs are physically compatible.
I do not believe there is a real risk before 8.0, since before that
we only used this routine to compare successive states of the tupdesc
for a particular relation. But 8.0's typcache.c might be comparing
arbitrary tupdescs so we'd better play it safer.
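
A simplified sketch of the tightened check, using an illustrative struct with only the relevant fields: when a column is dropped its type OID is zero, so the physical-storage fields must be compared as well.

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct
    {
        uint32_t    atttypid;       /* type OID; 0 for a dropped column */
        int16_t     attlen;         /* physical length */
        bool        attbyval;       /* passed by value? */
        char        attalign;       /* alignment code */
        bool        attisdropped;   /* column has been dropped */
    } AttrSketch;

    /* Two attributes are physically compatible only if the storage-level
     * fields agree, not merely the (possibly zero) type OID. */
    static bool
    attrs_physically_equal(const AttrSketch *a, const AttrSketch *b)
    {
        return a->atttypid == b->atttypid &&
               a->attlen == b->attlen &&
               a->attbyval == b->attbyval &&
               a->attalign == b->attalign &&
               a->attisdropped == b->attisdropped;
    }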

fit of over-optimization.

isn't presently set up to pass them an expected tuple descriptor. Bug has
been there since 7.3 but was just recently reported by Thomas Hallgren.

whose keys are OIDs. The only one that looks particularly performance
critical is the relcache hashtable, but as long as we've got the function
we may as well use it wherever it's applicable.
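
A sketch of the kind of specialization meant, with an assumed callback shape and mixing constant: since every key is a 4-byte OID, hash the value directly rather than running a general byte-string hash over it.

    #include <stddef.h>
    #include <stdint.h>

    typedef uint32_t Oid;

    /* Hash callback for tables keyed by a single OID: read the key as a
     * 32-bit value and mix it, rather than hashing it byte by byte. */
    static uint32_t
    oid_key_hash(const void *key, size_t keysize)
    {
        Oid         oid = *(const Oid *) key;

        (void) keysize;             /* always sizeof(Oid) for these tables */
        return oid * 2654435761u;   /* Knuth-style multiplicative mix */
    }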

indexes. Replace all heap_openr and index_openr calls by heap_open
and index_open. Remove runtime lookups of catalog OID numbers in
various places. Remove relcache's support for looking up system
catalogs by name. Bulky but mostly very boring patch ...

thread support.

indexes. Extend the macros in include/catalog/*.h to carry the info
about hand-assigned OIDs, and adjust the genbki script and bootstrap
code to make the relations actually get those OIDs. Remove the small
number of RelOid_pg_foo macros that we had in favor of a complete
set named like the catname.h and indexing.h macros. Next phase will
get rid of internal use of names for looking up catalogs and indexes;
but this completes the changes forcing an initdb, so it looks like a
good place to commit.
Along the way, I made the shared relations (pg_database etc) not be
'bootstrap' relations any more, so as to reduce the number of hardwired
entries and simplify changing those relations in future. I'm not
sure whether they ever really needed to be handled as bootstrap
relations, but it seems to work fine to not do so now.

avoid encroaching on the 'user' range of OIDs by allowing automatic
OID assignment to use values below 16k until we reach normal operation.
initdb not forced since this doesn't make any incompatible change;
however a lot of stuff will have different OIDs after your next initdb.

of just a relation OID, thereby not having to open the relation for itself.
This actually saves code rather than adding it for most of the existing
callers, which had the rel open already. The main point though is to be
able to use this rather than plain addRangeTableEntry in setTargetTable,
thus saving one relation_openrv/relation_close cycle for every INSERT,
UPDATE, or DELETE. Seems to provide a several percent win on simple
INSERTs.

On reflection, we ought to get rid of that mechanism entirely.

genbki.sh's pool (10000-16383) instead of being run-time assigned by
heap_insert. Might as well use the pool as long as it's there ...
I was a bit bemused to realize that it hadn't been in use at all since 7.2.
initdb not forced since this doesn't really affect anything. The OIDs
of casts and system indexes will change next time you do one, though.

and PL languages during initdb. The default permissions for these objects
are the same as what we were assigning anyway, so there is no need to
expend space in the catalogs on them. The space cost is particularly
significant in pg_proc's indexes, which are bloated by about a factor of 2
by the full-table update, and can never really recover the space.
initdb not forced, since the change has no actual impact on behavior.

from index, since the aggregates ignore NULLs.

be supported for all datatypes. Add CREATE AGGREGATE and pg_dump support
too. Add specialized min/max aggregates for bpchar, instead of depending
on text's min/max, because otherwise the possible use of bpchar indexes
cannot be recognized.
initdb forced because of catalog changes.

into indexscans on matching indexes. For the moment, it only handles
int4 and text datatypes; next step is to add a column to pg_aggregate
so that all MIN/MAX aggregates can be handled. Per my recent proposal.

deferred triggers: either one can create more work for the other,
so we have to loop till it's all gone. Per example from andrew@supernews.
Add a regression test to help spot trouble in this area in future.
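
A toy control-flow sketch of the fix; the counters merely stand in for the real RI-action and deferred-trigger queues: keep making passes until one complete pass finds no pending work of either kind.

    #include <stdbool.h>

    static int  pending_cascades = 3;   /* stand-in for queued RI actions */
    static int  pending_triggers = 2;   /* stand-in for deferred triggers */

    /* In the real system each kind of work can enqueue more of the other,
     * which is why a single pass over each queue is not enough. */
    static bool
    run_one_cascade(void)
    {
        if (pending_cascades == 0)
            return false;
        pending_cascades--;
        pending_triggers++;             /* cascaded work queues a trigger */
        return true;
    }

    static bool
    run_one_trigger(void)
    {
        if (pending_triggers == 0)
            return false;
        pending_triggers--;
        return true;
    }

    static void
    flush_afterwork(void)
    {
        bool        did_work;

        do
        {
            did_work = false;
            while (run_one_cascade())
                did_work = true;
            while (run_one_trigger())
                did_work = true;
        } while (did_work);
    }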

while completing execution of the cursor's query. Otherwise we get wrong
answers or even crashes from non-volatile functions called by the query.
Per report from andrew@supernews.

that is a plain NULL and not a COALESCE with no inputs. Fixes crash
reported by Michael Williamson.

decides whether to use hashed grouping instead of sort-plus-uniq
grouping. The function needs an annoyingly large number of parameters,
but this still seems like a win for legibility, since it removes over
a hundred lines from grouping_planner (which is still too big :-().

into the wrong memory context, resulting in a query-lifespan memory leak.
Bug is new in 8.0, I believe. Per report from Rae Stiening.
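
An illustrative backend-style pattern, with assumed function and context names rather than the fixed code: switch into a short-lived context before allocating per-row scratch data, then switch back, so the allocation is reclaimed when that context is reset instead of surviving for the whole query.

    #include "postgres.h"
    #include "utils/memutils.h"

    /* Allocate scratch space in the caller-supplied short-lived context
     * instead of whatever CurrentMemoryContext happens to be, then restore
     * the previous context before returning. */
    static char *
    alloc_scratch(MemoryContext per_row_cxt, Size len)
    {
        MemoryContext oldcxt = MemoryContextSwitchTo(per_row_cxt);
        char       *buf = palloc(len);

        MemoryContextSwitchTo(oldcxt);
        return buf;     /* freed automatically when per_row_cxt is reset */
    }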

< * Allow additional tables to be specified in DELETE for joins
> * -Allow additional tables to be specified in DELETE for joins

we can put words in ulink and the URL will still be printed.
per Peter