| Commit message | Author | Age |
| |
the difference between checkpoints forced due to WAL segment consumption
and checkpoints forced for other reasons (such as CREATE DATABASE). Avoid
generating 'checkpoints are occurring too frequently' messages when the
checkpoint wasn't caused by WAL segment consumption. Per gripe from
Chris K-L.
|
| |
current time: provide a GetCurrentTimestamp() function that returns the
current time in the form of a TimestampTz, instead of separate time_t
and microseconds fields. This is what all the callers really want
anyway, and it eliminates low-level dependencies on AbsoluteTime,
which is a deprecated datatype that will have to disappear eventually.
|
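The GetCurrentTimestamp() change above is an internal API cleanup. As a minimal, editor-added sketch (not code from the commit; the variable and function names are invented, and the declaration is assumed to live in utils/timestamp.h), a backend caller that used to keep a time_t plus a separate microseconds field now stores a single TimestampTz:

    #include "postgres.h"
    #include "utils/timestamp.h"

    /* hypothetical bookkeeping variable, for illustration only */
    static TimestampTz last_checkpoint_time;

    static void
    remember_checkpoint_time(void)
    {
        /* one call replaces the old time(NULL) plus microseconds pair */
        last_checkpoint_time = GetCurrentTimestamp();
    }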
| |
and pg_auth_members. There are still many loose ends to finish in this
patch (no documentation, no regression tests, no pg_dump support, for
instance). But I'm going to commit it now anyway so that Alvaro can
make some progress on shared dependencies. The catalog changes should
be pretty much done.
|
| |
in the database. The old behavior (reindex system catalogs only) is now
available as REINDEX SYSTEM. I did not add the complementary REINDEX USER
case since there did not seem to be consensus for this, but it would be
trivial to add later. Per recent discussions.
|
| |
of columns of a query result so that it can "see through" cursors and
prepared statements. Per gripe a couple months back from John DeSoi.
|
| |
hacking by Alvaro Herrera and Tom Lane.
|
| |
Euler Taveira de Oliveira
Matthias Schmidt
|
| |
performance problem pointed out by phil@vodafone: to wit, we were
spending O(N^2) time to check dropped-ness in an N-deep join tree,
even in the case where the tree was freshly constructed and couldn't
possibly mention any dropped columns. Instead of recursing in
get_rte_attribute_is_dropped(), change the data structure definition:
the joinaliasvars list of a JOIN RTE must have a NULL Const instead
of a Var at any position that references a now-dropped column. This
costs nothing during the normal parse-rewrite-plan path, and instead we
have a linear-time update to make when loading a stored rule that
might contain now-dropped columns. While at it, move the responsibility
for acquiring locks on relations referenced by rules into this separate
function (which I therefore chose to call AcquireRewriteLocks).
This saves effort --- namely, duplicated lock grabs in parser and rewriter
--- in the normal path at a cost of one extra non-locked heap_open()
in the stored-rule path; seems a good tradeoff. A fringe benefit is
that it is now *much* clearer that we acquire lock on relations referenced
in rules before we make any rewriter decisions based on their properties.
(I don't know of any bug of that ilk, but it wasn't exactly clear before.)
|
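As an editor-added sketch of the joinaliasvars convention described above (a hypothetical helper, not the committed get_rte_attribute_is_dropped() code), testing whether a join alias column is dropped becomes a local check for a null Const instead of a recursive descent into child range-table entries:

    #include "postgres.h"
    #include "nodes/parsenodes.h"

    /* attnum is 1-based, as in the catalogs */
    static bool
    join_column_is_dropped(RangeTblEntry *join_rte, int attnum)
    {
        /* joinaliasvars carries a null Const where a dropped column used to be */
        Node   *aliasvar = (Node *) list_nth(join_rte->joinaliasvars, attnum - 1);

        return IsA(aliasvar, Const) && ((Const *) aliasvar)->constisnull;
    }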
| |
to just around the bare recv() call that gets a command from the client.
The former placement in PostgresMain was unsafe because the intermediate
processing layers (especially SSL) use facilities such as malloc that are
not necessarily re-entrant. Per report from counterstorm.com.
|
| |
and DDL statements.
Backpatch fix to 8.0.X.
Per report from Murthy Kambhampaty
|
| |
Log the prepared query during execute. Bruce Momjian
|
| |
only one argument. (Per recent discussion, the option to accept multiple
arguments is pretty useless for user-defined types, and would be a likely
source of security holes if it was used.) Simplify call sites of
output/send functions to not bother passing more than one argument.
|
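For illustration only (an editor-added sketch with a made-up type, not code from the commit), a type output function under the single-argument convention simply formats its one value argument:

    #include "postgres.h"
    #include "fmgr.h"

    PG_FUNCTION_INFO_V1(mytype_out);

    Datum
    mytype_out(PG_FUNCTION_ARGS)
    {
        int32   value = PG_GETARG_INT32(0);   /* the value is the only argument */
        char   *result = (char *) palloc(16);

        snprintf(result, 16, "%d", value);
        PG_RETURN_CSTRING(result);
    }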
| |
to eliminate unnecessary deadlocks. This commit adds SELECT ... FOR SHARE
paralleling SELECT ... FOR UPDATE. The implementation uses a new SLRU
data structure (managed much like pg_subtrans) to represent multiple-
transaction-ID sets. When more than one transaction is holding a shared
lock on a particular row, we create a MultiXactId representing that set
of transactions and store its ID in the row's XMAX. This scheme allows
an effectively unlimited number of row locks, just as we did before,
while not costing any extra overhead except when a shared lock actually
has to be shared. Still TODO: use the regular lock manager to control
the grant order when multiple backends are waiting for a row lock.
Alvaro Herrera and Tom Lane.
|
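An editor-added client-side example (libpq; the accounts table and connection string are invented) of the new syntax: several sessions can hold SELECT ... FOR SHARE locks on the same row at once, while an UPDATE of that row has to wait for the share locks to go away:

    #include <stdio.h>
    #include <stdlib.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        PGconn   *conn = PQconnectdb("dbname=test");
        PGresult *res;

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            PQfinish(conn);
            return EXIT_FAILURE;
        }

        PQclear(PQexec(conn, "BEGIN"));

        /* take a shared (not exclusive) lock on one row */
        res = PQexec(conn, "SELECT balance FROM accounts WHERE id = 1 FOR SHARE");
        if (PQresultStatus(res) != PGRES_TUPLES_OK)
            fprintf(stderr, "SELECT FOR SHARE failed: %s", PQerrorMessage(conn));
        PQclear(res);

        /* other sessions may run the same FOR SHARE query concurrently here */
        PQclear(PQexec(conn, "COMMIT"));
        PQfinish(conn);
        return EXIT_SUCCESS;
    }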
| |
logic operations during planning. Seems cleaner to create two new Path
node types, instead --- this avoids duplication of cost-estimation code.
Also, create an enable_bitmapscan GUC parameter to control use of bitmap
plans.
|
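As a brief editor-added illustration (helper function only; the query text is up to the caller), the new GUC can be flipped per session to compare plans with and without bitmap scans:

    #include <stdio.h>
    #include <libpq-fe.h>

    /* print EXPLAIN output for a query with bitmap scans enabled or disabled */
    static void
    explain_with_bitmapscan(PGconn *conn, const char *query, int enabled)
    {
        char      sql[1024];
        PGresult *res;
        int       i;

        PQclear(PQexec(conn, enabled ? "SET enable_bitmapscan = on"
                                     : "SET enable_bitmapscan = off"));

        snprintf(sql, sizeof(sql), "EXPLAIN %s", query);
        res = PQexec(conn, sql);
        if (PQresultStatus(res) == PGRES_TUPLES_OK)
            for (i = 0; i < PQntuples(res); i++)
                printf("%s\n", PQgetvalue(res, i, 0));
        PQclear(res);
    }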
| |
indexes. Extend the macros in include/catalog/*.h to carry the info
about hand-assigned OIDs, and adjust the genbki script and bootstrap
code to make the relations actually get those OIDs. Remove the small
number of RelOid_pg_foo macros that we had in favor of a complete
set named like the catname.h and indexing.h macros. Next phase will
get rid of internal use of names for looking up catalogs and indexes;
but this completes the changes forcing an initdb, so it looks like a
good place to commit.
Along the way, I made the shared relations (pg_database etc) not be
'bootstrap' relations any more, so as to reduce the number of hardwired
entries and simplify changing those relations in future. I'm not
sure whether they ever really needed to be handled as bootstrap
relations, but it seems to work fine to not do so now.
|
| |
former to 100 by default. Clean up some of the less necessary
dependencies on FUNC_MAX_ARGS; however, the biggie (FunctionCallInfoData)
remains.
|
| |
change saves a great deal of space in pg_proc and its primary index,
and it eliminates the former requirement that INDEX_MAX_KEYS and
FUNC_MAX_ARGS have the same value. INDEX_MAX_KEYS is still embedded
in the on-disk representation (because it affects index tuple header
size), but FUNC_MAX_ARGS is not. I believe it would now be possible
to increase FUNC_MAX_ARGS at little cost, but haven't experimented yet.
There are still a lot of vestigial references to FUNC_MAX_ARGS, which
I will clean up in a separate pass. However, getting rid of it
altogether would require changing the FunctionCallInfoData struct,
and I'm not sure I want to buy into that.
|
| |
executing a statement that fires triggers. Formerly this time was
included in "Total runtime" but not otherwise accounted for.
As a side benefit, we avoid re-opening relations when firing non-deferred
AFTER triggers, because the trigger code can re-use the main executor's
ResultRelInfo data structure.
|
| |
of tuples when passing data up through multiple plan nodes. A slot can now
hold either a normal "physical" HeapTuple, or a "virtual" tuple consisting
of Datum/isnull arrays. Upper plan levels can usually just copy the Datum
arrays, avoiding heap_formtuple() and possible subsequent nocachegetattr()
calls to extract the data again. This work extends Atsushi Ogawa's earlier
patch, which provided the key idea of adding Datum arrays to TupleTableSlots.
(I believe however that something like this was foreseen way back in Berkeley
days --- see the old comment on ExecProject.) A test case involving many
levels of join of fairly wide tables (about 80 columns altogether) showed
about 3x overall speedup, though simple queries will probably not be
helped very much.
I have also duplicated some code in heaptuple.c in order to provide versions
of heap_formtuple and friends that use "bool" arrays to indicate null
attributes, instead of the old convention of "char" arrays containing either
'n' or ' '. This provides a better match to the convention used by
ExecEvalExpr. While I have not made a concerted effort to get rid of uses
of the old routines, I think they should be deprecated and eventually removed.
|
| |
whether or not it is a security definer. Changing a function's strictness
is required by SQL2003, and the other capabilities make sense. Also, allow
an optional RESTRICT noise word to be specified, for SQL conformance.
Some trivial regression tests have been added and the documentation has
been updated.
|
| |
macros around strings that were missing them.
|
| |
in GetNewTransactionId(). Since the limit value has to be computed
before we run any real transactions, this requires adding code to database
startup to scan pg_database and determine the oldest datfrozenxid.
This can conveniently be combined with the first stage of an attack on
the problem that the 'flat file' copies of pg_shadow and pg_group are
not properly updated during WAL recovery. The code I've added to
startup resides in a new file src/backend/utils/init/flatfiles.c, and
it is responsible for rewriting the flat files as well as initializing
the XID wraparound limit value. This will eventually allow us to get
rid of GetRawDatabaseInfo too, but we'll need an initdb so we can add
a trigger to pg_database.
|
| |
that return tuples (such as EXPLAIN). Per gripe from Michael Fuhr.
Side effect: fix an old bug that unintentionally disabled backward scans
for all SPI-created cursors.
|
| |
command counter more than necessary. Per report from Michael Fuhr.
|
| |
command. This is useful because we can allow truncation of tables
referenced by foreign keys, so long as the referencing table is
truncated in the same command.
Alvaro Herrera
|
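For example (an editor-added sketch; the table names are invented, with orders holding a foreign key that references customers), truncating the referenced table together with the referencing table in one command is accepted, whereas truncating customers alone would still be rejected:

    #include <stdio.h>
    #include <libpq-fe.h>

    static void
    truncate_customers_and_orders(PGconn *conn)
    {
        /* naming both tables in the same command satisfies the FK check */
        PGresult *res = PQexec(conn, "TRUNCATE customers, orders");

        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "TRUNCATE failed: %s", PQerrorMessage(conn));
        PQclear(res);
    }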
| |
Also performed an initial run-through of upgrading our Copyright date to
extend to 2005 ... first run here was very simple ... change everything
where: grep 1996-2004 && the word 'Copyright' ... scanned through the
generated list with 'less' first, and after, to make sure that I only
picked up the right entries ...
|
| |
to be processed by GUC before InitPostgres, because any required lookup
of the encoding conversion function has to be done during InitializeClientEncoding.
So, I broke this last week by moving GUC processing to after InitPostgres :-(.
What we can do as a compromise is process non-SUSET variables during
command line scanning (the same as before), and postpone the processing
of only SUSET variables. None of the SUSET variables need to be set
before InitPostgres.
|
| |
collector until the transaction commits. Per recent discussion, this
should avoid confusing autovacuum when an updating transaction runs for
a long time.
|
| |
plain SUSET instead. Also delay processing of options received in the
client connection request until after we know whether the user is a superuser,
so that SUSET values can be set that way by legitimate superusers.
Per recent discussion.
|
| |
This handles the new multiple log_statement values.
Ed L.
|
| |
Refactor code into something reasonably understandable, cause
use of the feature to not fail in standalone backends or in the
EXEC_BACKEND case, fix sloppy guc.c table entries, and make the
documentation minimally usable.
|
| |
running contains VACUUM or a similar command that will internally start
and commit transactions. In such a case, the original caller values of
CurrentMemoryContext and CurrentResourceOwner will point to objects that
will be destroyed by the internal commit. We must restore these pointers
to point to the newly-manufactured transaction context and resource owner,
rather than possibly pointing to deleted memory.
Also tweak xact.c so that AbortTransaction and AbortSubTransaction
forcibly restore a sane value for CurrentResourceOwner, much as they
have always done for CurrentMemoryContext. I'm not certain this is
necessary but I'm feeling paranoid today.
Responds to Sean Chittenden's bug report of 4-Oct.
|
| |
it really hadn't, due to double output of the previous command's response.
The fix prevents recursive entry to libpq routines. Found by Jan Wieck.
|
| |
mode see a fresh snapshot for each command in the function, rather than
using the latest interactive command's snapshot. Also, suppress fresh
snapshots as well as CommandCounterIncrement inside STABLE and IMMUTABLE
functions, instead using the snapshot taken for the most closely nested
regular query. (This behavior is only sane for read-only functions, so
the patch also enforces that such functions contain only SELECT commands.)
As per my proposal of 6-Sep-2004; I note that I floated essentially the
same proposal on 19-Jun-2002, but that discussion tailed off without any
action. Since 8.0 seems like the right place to be taking possibly
nontrivial backwards compatibility hits, let's get it done now.
|
| |
rather than when returning to the idle loop. This makes no particular
difference for interactively-issued queries, but it makes a big difference
for queries issued within functions: trigger execution now occurs before
the calling function is allowed to proceed. This responds to numerous
complaints about nonintuitive behavior of foreign key checking, such as
http://archives.postgresql.org/pgsql-bugs/2004-09/msg00020.php, and
appears to be required by the SQL99 spec.
Also take the opportunity to simplify the data structures used for the
pending-trigger list, rename them for more clarity, and squeeze out a
bit of space.
|
| |
Gavin Sherry
|
| |
by the SQL standard. For backwards compatibility, however, continue to
accept the syntax without. Minor editorialization in the reference pages
for these commands, too.
|
| |
compliance with SQL2003 spec syntax.
Oliver Jowett
|
| |
executed. Previously, the DECLARE would succeed but subsequent FETCHes
would fail since the parameter values supplied to DECLARE were not
propagated to the portal created for the cursor.
In support of this, add type Oids to ParamListInfo entries, which seems
like a good idea anyway since code that extracts a value can double-check
that it got the type of value it was expecting.
Oliver Jowett, with minor editorialization by Tom Lane.
|
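An editor-added libpq sketch of the case this fixes (cursor, table, and column names are invented): the DECLARE is sent through the extended query protocol with a bound parameter, and later FETCHes now see that value:

    #include <stdio.h>
    #include <libpq-fe.h>

    static void
    fetch_orders_for_customer(PGconn *conn, const char *customer_id)
    {
        const char *params[1];
        PGresult   *res;

        params[0] = customer_id;

        PQclear(PQexec(conn, "BEGIN"));

        /* DECLARE travels via the extended protocol, with one text parameter */
        res = PQexecParams(conn,
                           "DECLARE c CURSOR FOR "
                           "SELECT * FROM orders WHERE customer_id = $1",
                           1, NULL, params, NULL, NULL, 0);
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "DECLARE failed: %s", PQerrorMessage(conn));
        PQclear(res);

        /* the portal created for the cursor now carries the parameter value */
        res = PQexec(conn, "FETCH 10 FROM c");
        if (PQresultStatus(res) == PGRES_TUPLES_OK)
            printf("fetched %d rows\n", PQntuples(res));
        PQclear(res);

        PQclear(PQexec(conn, "COMMIT"));
    }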
| |
number of active subtransaction XIDs in each backend's PGPROC entry,
and use this to avoid expensive probes into pg_subtrans during
TransactionIdIsInProgress. Extend EOXactCallback API to allow add-on
modules to get control at subxact start/end. (This is deliberately
not compatible with the former API, since any uses of that API probably
need manual review anyway.) Add basic reference documentation for
SAVEPOINT and related commands. Minor other cleanups to check off some
of the open issues for subtransactions.
Alvaro Herrera and Tom Lane.
|
| |
possible to trap an error inside a function rather than letting it
propagate out to PostgresMain. You still have to use AbortCurrentTransaction
to clean up, but at least the error handling itself will cooperate.
|
| |
followed by seeing EOF from client. If we want a safe session-kill
capability we will need to write one, not break our error handling
mechanism.
|
| |
SAVEPOINT/RELEASE/ROLLBACK-TO syntax. (Alvaro)
Cause COMMIT of a failed transaction to report ROLLBACK instead of
COMMIT in its command tag. (Tom)
Fix a few loose ends in the nested-transactions stuff.
|
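An editor-added libpq illustration of the command-tag change noted above: after an error has aborted the transaction, issuing COMMIT still ends it, but the tag that comes back is ROLLBACK rather than COMMIT:

    #include <stdio.h>
    #include <libpq-fe.h>

    static void
    commit_after_error(PGconn *conn)
    {
        PGresult *res;

        PQclear(PQexec(conn, "BEGIN"));

        /* deliberately fail; the transaction is now in an aborted state */
        PQclear(PQexec(conn, "SELECT 1/0"));

        res = PQexec(conn, "COMMIT");
        /* prints "ROLLBACK", not "COMMIT", because the transaction failed */
        printf("command tag: %s\n", PQcmdStatus(res));
        PQclear(res);
    }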
| |
keep track of portal-related resources separately from transaction-related
resources. This allows cursors to work in a somewhat sane fashion with
nested transactions. For now, cursor behavior is non-subtransactional;
that is, a cursor's state does not roll back if you abort a subtransaction
that fetched from the cursor. We might want to change that later.
|
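A short editor-added sequence (libpq; the cursor and table names are invented) showing the non-subtransactional cursor behavior described above: rolling back the subtransaction that did a FETCH does not rewind the cursor.

    #include <libpq-fe.h>

    static void
    cursor_survives_subxact_abort(PGconn *conn)
    {
        PQclear(PQexec(conn, "BEGIN"));
        PQclear(PQexec(conn, "DECLARE c CURSOR FOR SELECT * FROM big_table"));

        PQclear(PQexec(conn, "SAVEPOINT s"));
        PQclear(PQexec(conn, "FETCH 100 FROM c"));          /* advances the cursor */
        PQclear(PQexec(conn, "ROLLBACK TO SAVEPOINT s"));

        /* the cursor's position did not roll back; this continues at row 101 */
        PQclear(PQexec(conn, "FETCH 100 FROM c"));
        PQclear(PQexec(conn, "COMMIT"));
    }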
| |
Add new postgresql.conf variables to point to the data directory and to the
pg_hba.conf and pg_ident.conf files.
Needs more documentation.
|
| |
performance front, but with feature freeze upon us I think it's time to
drive a stake in the ground and say that this will be in 7.5.
Alvaro Herrera, with some help from Tom Lane.
|