Commit messages

remove the special case in ALTER DROP COLUMN to prohibit dropping a
table's last column.

Vacuum must not advance pg_database.datvacuumxid nor truncate CLOG
unless it's processed *all* tables in the database. Vacuums run by
unprivileged users don't count.
(Beats head against nearest convenient wall...)
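
A minimal standalone C sketch of the rule above (a toy model; the
function maybe_update_dbstats and its arguments are illustrative, not
PostgreSQL source): the per-database horizon may only advance, and CLOG
may only be truncated, when the vacuum actually covered every table.

#include <stdbool.h>
#include <stdio.h>

typedef unsigned int TransactionId;

static void
maybe_update_dbstats(bool scanned_all_tables, TransactionId frozen_xid)
{
    if (!scanned_all_tables)
    {
        /* A partial vacuum (e.g. by an unprivileged user who can only
         * touch his own tables) must not move datvacuumxid, or CLOG
         * entries still needed by the skipped tables could be lost. */
        printf("partial vacuum: datvacuumxid unchanged, CLOG kept\n");
        return;
    }
    printf("database-wide vacuum: datvacuumxid -> %u, CLOG truncatable\n",
           frozen_xid);
}

int
main(void)
{
    maybe_update_dbstats(false, 1000);  /* unprivileged user's VACUUM */
    maybe_update_dbstats(true, 1000);   /* all tables were processed */
    return 0;
}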

Please apply the attached patch and this should be solved.
Alvaro Herrera

heap_addheader is wrong because it doesn't cope with varlena fields,
notably indpred.
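
For context, a self-contained C illustration of the varlena issue above
(illustrative only, not heapam code): a varlena field such as indpred is
variable-width and carries its own 4-byte length header, so code that
assumes fixed-width fields miscomputes sizes and offsets.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int
main(void)
{
    /* Build a varlena-style datum: 4-byte length header + payload. */
    const char *payload = "f1 > 0";   /* e.g. a partial-index predicate */
    uint32_t    len = (uint32_t) (4 + strlen(payload));
    char        datum[64];

    memcpy(datum, &len, 4);
    memcpy(datum + 4, payload, strlen(payload));

    /* The only correct way to size the field is to read its header. */
    uint32_t stored_len;
    memcpy(&stored_len, datum, 4);
    printf("datum occupies %u bytes including its header\n", stored_len);
    return 0;
}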

no reason to worry about the tuple commit status bits until the tuple
is inserted in a relation by heapam.c. Also, improve comments for
heap_addheader().

at this area in the code.

recent WAL activity has occurred. Without this, it's possible that a
later crash might leave tuples on disk with un-updated commit status
bits.

VACUUM FULL tuple moves. Store full-width t_infomask in WAL, rather
than storing low 8 bits and expecting to be able to reconstruct upper
bits. While at it, remove redundant t_oid field from WAL headers
(the OID, if present, is now recorded in the data portion of the tuple).
WAL version number bumped --- this does not force an initdb; you can
instead run pg_resetxlog after a clean shutdown of the old postmaster.
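
A standalone C illustration of the width problem fixed above (toy
values; HEAP_LOW_FLAGS and HEAP_UPPER_FLAG are hypothetical, not real
heapam flags): truncating the 16-bit t_infomask to its low byte
discards any flag in the upper byte, and replay cannot reconstruct it.

#include <stdint.h>
#include <stdio.h>

#define HEAP_LOW_FLAGS   0x0003   /* hypothetical flags in the low byte */
#define HEAP_UPPER_FLAG  0x4000   /* hypothetical flag in the high byte */

int
main(void)
{
    uint16_t infomask   = HEAP_LOW_FLAGS | HEAP_UPPER_FLAG;
    uint8_t  logged_old = (uint8_t) infomask;  /* old WAL: low 8 bits */
    uint16_t logged_new = infomask;            /* new WAL: full width */

    printf("original:          0x%04x\n", (unsigned) infomask);
    printf("old-format replay: 0x%04x (upper flag lost)\n",
           (unsigned) logged_old);
    printf("new-format replay: 0x%04x\n", (unsigned) logged_new);
    return 0;
}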

causing the postmaster to crash when the trigger was running on a table
without a primary key.
I've also updated the docs to explicitly say that tables need primary
keys.
Steven Singer

let's say this patch supersedes the previous one.
I have also attached a patch addressing the similar memory leak problem in
plpython. This includes a slight adjustment of the tests in the source
directory. The patch also includes a cosmetic change to remove a compiler
warning, although I think the change makes the code look worse.
BTW, by my reckoning the memory leak would occur both with prepared plans
and without. If that is not the case then I've been barking up the wrong
tree.
Nigel J. Andrews

adding a missing sprintf().
Neil Conway

handling in the backend.

number of forward references in the admin guide.

ProcKill instead, where we still have a PGPROC with which to wait on
LWLocks. This fixes 'can't wait without a PROC structure' failures
occasionally seen during backend shutdown (I'm surprised they weren't
more frequent, actually). Add an Assert() to LWLockAcquire to help
catch any similar mistakes in future. Fix failure to update MyProcPid
for standalone backends and pgstat processes.
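
A toy C model of the Assert mentioned above (stub types; not the real
lwlock.c): any code path that might have to sleep on an LWLock must
still have a PGPROC to wait on, which is why the waiting now happens in
ProcKill rather than after the PGPROC is gone.

#include <assert.h>
#include <stdio.h>

typedef struct PGPROC { int pid; } PGPROC;

static PGPROC  proc = { 12345 };
static PGPROC *MyProc = &proc;   /* would be NULL once ProcKill has run */

static void
lwlock_acquire(const char *lockname)
{
    /* Sleeping on an LWLock needs a PGPROC; asserting here turns
     * "can't wait without a PROC structure" shutdown failures into an
     * immediate, debuggable assertion during development. */
    assert(MyProc != NULL);
    printf("pid %d acquired %s\n", MyProc->pid, lockname);
}

int
main(void)
{
    lwlock_acquire("BufMgrLock");
    return 0;
}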

jdbc datasource support for jdk1.4/jdbc3
Modified Files:
    jdbc/build.xml jdbc/org/postgresql/Driver.java.in
    jdbc/org/postgresql/jdbc2/optional/BaseDataSource.java
    jdbc/org/postgresql/jdbc2/optional/PGObjectFactory.java
    jdbc/org/postgresql/jdbc2/optional/PooledConnectionImpl.java
    jdbc/org/postgresql/jdbc2/optional/PoolingDataSource.java
    jdbc/org/postgresql/test/jdbc2/optional/BaseDataSourceTest.java
    jdbc/org/postgresql/test/jdbc2/optional/OptionalTestSuite.java
    jdbc/org/postgresql/test/jdbc3/Jdbc3TestSuite.java
Added Files:
    jdbc/org/postgresql/jdbc3/Jdbc3ConnectionPool.java
    jdbc/org/postgresql/jdbc3/Jdbc3ObjectFactory.java
    jdbc/org/postgresql/jdbc3/Jdbc3PooledConnection.java
    jdbc/org/postgresql/jdbc3/Jdbc3PoolingDataSource.java
    jdbc/org/postgresql/jdbc3/Jdbc3SimpleDataSource.java
    jdbc/org/postgresql/test/jdbc2/optional/PoolingDataSourceTest.java
    jdbc/org/postgresql/test/jdbc3/Jdbc3ConnectionPoolTest.java
    jdbc/org/postgresql/test/jdbc3/Jdbc3PoolingDataSourceTest.java
    jdbc/org/postgresql/test/jdbc3/Jdbc3SimpleDataSourceTest.java
    jdbc/org/postgresql/test/util/MiniJndiContext.java
    jdbc/org/postgresql/test/util/MiniJndiContextFactory.java

and PUBLIC EXECUTE, respectively. Per discussion about easing updates
from prior versions.

document that scheme.

Fixes problem with cases like
SELECT * FROM foo t WHERE NOT EXISTS
    (SELECT remoteid FROM
        (SELECT f1 as remoteid FROM foo WHERE f1 = t.f1) AS t1)

executor should not return the tuple as successfully marked, because in
fact it's been deleted. Not clear that this case has ever been seen
in practice (I think you'd have to write a SELECT FOR UPDATE that calls
a function that deletes some row the SELECT will visit later...) but we
should be consistent. Also add comments to several other places that
got it right but didn't explain what they were doing.
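
A stubbed C sketch of the consistency rule above (illustrative enum and
names, not executor source): when the row being marked FOR UPDATE turns
out to have been deleted by the current transaction itself, report it
as deleted rather than as successfully marked.

#include <stdio.h>

typedef enum
{
    TUPLE_LOCKABLE,     /* visible; may be marked for update */
    TUPLE_SELF_UPDATED  /* deleted earlier by the current transaction */
} MarkStatus;

static const char *
mark_for_update(MarkStatus status)
{
    switch (status)
    {
        case TUPLE_SELF_UPDATED:
            /* e.g. a function called mid-scan by SELECT FOR UPDATE
             * deleted a row the scan visits later: skip it. */
            return "skipped: deleted by our own transaction";
        case TUPLE_LOCKABLE:
            return "marked for update";
    }
    return "unreachable";
}

int
main(void)
{
    printf("%s\n", mark_for_update(TUPLE_LOCKABLE));
    printf("%s\n", mark_for_update(TUPLE_SELF_UPDATED));
    return 0;
}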

editing.

backends. Given that temp tables now store their data in the local
buffer manager, these things are not going to work safely.

to avoid having to write explicit casts. From Joe Conway.

> * Add start time to pg_stat_activity

The error message said so :-)
In 25.3. Using PL/Python,

    If the trigger "when" is BEFORE, you may return None or "OK"
    from the Python function to indicate the tuple is unmodified, "SKIP"
    to abort the event, or "MODIFIED" to indicate you've modified the tuple.

should read

    If the trigger "when" is BEFORE, you may return None or "OK"
    from the Python function to indicate the tuple is unmodified, "SKIP"
    to abort the event, or "MODIFY" to indicate you've modified the tuple.

elein

up to reaching the hard limit. After opening 16 (= the current RES_START
value) results via pg_exec, the next pg_exec tries to find an empty slot
forever :-(. In PgSetResultId (file pgtclId.c), the for loop has to
break when res_max is reached. The piece of code should look like

    if (resid == connid->res_max)
    {
        resid = 0;
        break;              /* the break has to be added */
    }

Now everything works (the number of available results doubles after
reaching RES_START, up to RES_HARD_MAX).
Gerhard Hintermayer

recent bug report). Fix processing of nailed-in-cache indexes;
it appears that REINDEX DATABASE has been broken for months :-(.

contains the correct statistics. This is a partial solution for the
problem of allowing concurrent CREATE INDEX commands: unless they commit
at nearly the same instant, the second one will see the first one's
pg_class updates as committed, and won't try to update again, thus
avoiding the 'tuple concurrently updated' failure.
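
A toy C model of the partial solution above (hypothetical names, not
the real catalog code): only write the pg_class row when the stored
statistics actually differ, so a second concurrent CREATE INDEX that
already sees correct values has nothing to update and no tuple to
conflict on.

#include <stdbool.h>
#include <stdio.h>

typedef struct
{
    int    relpages;
    double reltuples;
} RelStats;

static bool
update_stats_if_changed(RelStats *stored, int relpages, double reltuples)
{
    if (stored->relpages == relpages && stored->reltuples == reltuples)
        return false;            /* already correct: skip the update */
    stored->relpages  = relpages;
    stored->reltuples = reltuples;
    return true;
}

int
main(void)
{
    RelStats pg_class_row = { 0, 0.0 };

    printf("first CREATE INDEX wrote:  %s\n",
           update_stats_if_changed(&pg_class_row, 10, 1000.0) ? "yes" : "no");
    printf("second CREATE INDEX wrote: %s\n",
           update_stats_if_changed(&pg_class_row, 10, 1000.0) ? "yes" : "no");
    return 0;
}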

do.

versions of bison.