Commit message | Author | Age
...

Not sure why some were this way, and others were already correct, but it
seems to have been like this for several years.
This caused problems on a few damaged platforms like AIX and IRIX which do
not support DST calculations for years before 1970.
Thanks to Andreas Zeugswetter <ZeugswetterA@wien.spardat.at> for finding
the problem.
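For illustration only, a minimal sketch of the portability hazard, assuming nothing beyond standard <time.h>: on platforms that cannot do DST arithmetic for years before 1970, any reference timestamp used for this kind of probe has to fall after the Unix epoch.

```c
#include <stdio.h>
#include <time.h>

/*
 * Illustrative sketch (not the actual fix): probe local DST behaviour
 * with a post-1970 timestamp.  Platforms such as AIX and IRIX cannot
 * perform DST calculations for years before 1970, so the reference
 * date must lie after the Unix epoch.
 */
int
main(void)
{
    time_t      probe = (time_t) 828000000;  /* a date in 1996, safely after 1970 */
    struct tm  *tm = localtime(&probe);

    if (tm == NULL)
        return 1;
    printf("tm_isdst = %d\n", tm->tm_isdst);
    return 0;
}
```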

this file to match all the other files, and to be clearer.

confusing, and clean up documentation.

I hope. I finally realized that we were going at it backwards: when
there are excess parentheses, they need to be treated as part of the
sub-SELECT, not as part of the surrounding expression. Although either
choice yields an unambiguous grammar, only this way produces a grammar
that is LALR(1). With the old approach we were guaranteed to fail on
either 'SELECT (((SELECT 2)) + 3)' or
'SELECT (((SELECT 2)) UNION SELECT 2)' depending on which way we
resolve the initial shift/reduce conflict. With the new way, the same
reduction track can be followed in both cases until we have advanced
far enough to know whether we are done with the sub-SELECT or not.

on the old tuple's page while we are doing TOAST pushups.

given the fundamental restriction of not looking at transaction commit
data in pg_log. Use code that is actually based on tqual.c rather than
ad-hoc tests. Also write the tuple fetch loop using standard access
macros rather than ad-hoc code.
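As a point of reference, this is the standard heap-scan idiom the message alludes to, written roughly with the heap access functions of releases 8.x through 11 (signatures have shifted over the years, so treat it as a sketch rather than the code this commit produced):

```c
#include "postgres.h"

#include "access/heapam.h"
#include "utils/snapmgr.h"

/*
 * Sketch of the conventional tuple-fetch loop: open a scan with a snapshot
 * and let the access layer do the visibility checking, instead of
 * hand-rolled page walking and ad-hoc tuple tests.
 */
static void
scan_relation_example(Relation rel)
{
    HeapScanDesc scan;
    HeapTuple    tuple;

    scan = heap_beginscan(rel, GetActiveSnapshot(), 0, NULL);

    while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
    {
        /* examine the tuple via the standard macros, e.g. heap_getattr() */
    }

    heap_endscan(scan);
}
```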

Otherwise, newly connecting backends will still think the deleted DB is
valid, and will generate unexpected error messages.

it now returns true if the aclitem argument exactly matches any one of
the elements of the aclitem[] argument. Per complaint from Wolff 1/10/01.
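A standalone sketch of the "exact match" semantics described here; the struct and helper below are illustrative stand-ins, not the actual catalog definitions:

```c
#include <stdbool.h>
#include <stddef.h>

/* Illustrative stand-in for an ACL item: who holds which privilege bits. */
typedef struct ExampleAclItem
{
    unsigned int grantee;
    unsigned int privileges;
} ExampleAclItem;

/*
 * Return true only if "probe" exactly matches one element of the array:
 * every field must be equal, rather than the probe's privileges merely
 * being a subset of what some entry grants.
 */
static bool
acl_contains_exact(const ExampleAclItem *items, size_t nitems,
                   ExampleAclItem probe)
{
    for (size_t i = 0; i < nitems; i++)
    {
        if (items[i].grantee == probe.grantee &&
            items[i].privileges == probe.privileges)
            return true;
    }
    return false;
}
```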

are treated more like 'cancel' interrupts: the signal handler sets a
flag that is examined at well-defined spots, rather than trying to cope
with an interrupt that might happen anywhere. See pghackers discussion
of 1/12/01.
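The pattern being described is the classic deferred-interrupt scheme; a minimal self-contained sketch follows (variable and function names are illustrative, not the backend's actual ones):

```c
#include <signal.h>
#include <stdio.h>

/* Flags set by the signal handlers; nothing else happens at signal time. */
static volatile sig_atomic_t cancel_pending = 0;
static volatile sig_atomic_t die_pending = 0;

static void
handle_sigint(int signo)
{
    (void) signo;
    cancel_pending = 1;         /* just record the request */
}

static void
handle_sigterm(int signo)
{
    (void) signo;
    die_pending = 1;            /* likewise: no cleanup work in the handler */
}

/* Examined only at well-defined spots in the main code paths. */
static void
check_for_interrupts(void)
{
    if (die_pending)
    {
        /* safe point: release resources and exit in an orderly fashion */
        printf("shutting down\n");
    }
    else if (cancel_pending)
    {
        cancel_pending = 0;
        /* safe point: abort the current query and return to the main loop */
    }
}

int
main(void)
{
    signal(SIGINT, handle_sigint);
    signal(SIGTERM, handle_sigterm);

    /* stand-in for the server's main work loop */
    for (long i = 0; i < 1000000; i++)
        check_for_interrupts();

    return 0;
}
```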

rule. Needed to avoid failure when reloading a 7.0 pg_dump of a view
that has a NUMERIC column.

are now critical sections, so as to ensure die() won't interrupt us while
we are munging shared-memory data structures. Avoid insecure intermediate
states in some code that proc_exit will call, like palloc/pfree. Rename
START/END_CRIT_CODE to START/END_CRIT_SECTION, since that seems to be
what people tend to call them anyway, and make them be called with () like
a function call, in hopes of not confusing pg_indent.
I doubt that this is sufficient to make SIGTERM safe anywhere; there's
just too much code that could get invoked during proc_exit().
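A sketch of how function-call-style critical-section macros of this kind can be built on a nesting counter that the interrupt check consults; names are illustrative, and the backend's own definitions differ in detail:

```c
#include <stdio.h>

static void handle_deferred_die(void);

/* Depth of nested critical sections; die()/cancel responses are deferred
 * while it is non-zero, so shared-memory updates are never left half-done. */
static volatile int crit_section_count = 0;
static volatile int die_pending = 0;

/* Written with a trailing (), like a function call, as the message describes. */
#define START_CRIT_SECTION()  (crit_section_count++)
#define END_CRIT_SECTION() \
    do { \
        crit_section_count--; \
        if (crit_section_count == 0 && die_pending) \
            handle_deferred_die(); \
    } while (0)

static void
handle_deferred_die(void)
{
    /* safe point: now it is OK to clean up and exit */
    printf("exiting after critical section\n");
}

static void
update_shared_state(void)
{
    START_CRIT_SECTION();
    /* ... munge shared-memory data structures here ... */
    END_CRIT_SECTION();
}

int
main(void)
{
    update_shared_state();
    return 0;
}
```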

Wish they were all this easy ...

1. Support for variable-size keys - a new insertion algorithm for the tree
(GLI - GiST layered insertion). The previous algorithm, implemented as
described in the paper by Joseph M. Hellerstein et al.,
"Generalized Search Trees for Database Systems", was not suitable for
variable-size keys and could be inefficient (walking up and down) when
a split propagates across multiple levels.
Bugs fixed:
1. Fixed a bug in gistPageAddItem - key values were written to disk
uncompressed. This caused failures if the decompression function
does real work.
2. NULLs handling - we keep NULLs in the tree. The right way is to remove
them, but we don't know how to inform vacuum about index statistics.
This is just a cosmetic warning message (as in the R-tree case), but I'm
not sure how to recognize a real problem if we remove NULLs and suppress
this warning as Tom suggested.
3. Various memory leaks.
This work was done by Teodor Sigaev (teodor@stack.net) and
Oleg Bartunov (oleg@sai.msu.su).

DeadLockCheck().

code would cluster, but table would magically lose its tempness.

- no more elog(STOP) in StartupXLOG();
- both the checkpoint's undo & redo pointers are used to define the
oldest on-line log file.
2. Ability to pre-allocate a few log files at checkpoint time
(wal_files option). Off by default.

as both a GROUP BY item and an output expression, the top-level Group
node should just copy up the evaluated expression value from its input,
rather than re-evaluating the expression. Aside from any performance
benefit this might offer, this avoids a crash when there is a sub-SELECT
in said expression.

inconsistent coding practices for handling Index values and booleans,
too.

before calling RelationInvalidateHeapTuple(), which is bad because the
latter needs to look at the tuple data, which is in the shared disk
buffer. If another backend manages to recycle the buffer while this
is going on, we will compute the wrong hashindex for the tuple or
maybe even crash outright. Must hold buffer refcount until afterwards.
(This bug is not in 7.0.*; it seems to have been introduced during the WAL changes.)
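The ordering constraint, as a sketch against that era's headers (the hypothetical wrapper function is mine; RelationInvalidateHeapTuple() was later renamed CacheInvalidateHeapTuple() and grew an extra argument):

```c
#include "postgres.h"

#include "access/heapam.h"
#include "storage/bufmgr.h"
#include "utils/inval.h"
#include "utils/rel.h"

/*
 * Sketch of the safe ordering.  The invalidation call reads the tuple data,
 * which lives in the shared disk buffer, so the buffer pin must be held
 * across the call.
 */
static void
invalidate_then_release(Relation rel, HeapTuple tuple, Buffer buf)
{
    /* Wrong: calling ReleaseBuffer(buf) here would let another backend
     * recycle the page while the tuple data is still being examined. */

    RelationInvalidateHeapTuple(rel, tuple);    /* still looking at the buffer */

    ReleaseBuffer(buf);                         /* only now drop the pin */
}
```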

that leftover cancel/die requests cannot interfere with exit activities.

of 6 Jan 2001 21:55.

and burn. Just for added luck, change reading of CONST nodes so that
we do not need to consult pg_type rows while reading them; this means
that no database access occurs during stringToNode. This requires
changing the order in which const-node fields are written, which means
an initdb is forced.

Disallow cases like adding constraints to sequences :-(, and eliminate
now-unnecessary search of pg_rewrite to decide if a relation is a view.

error, so as to provide a starting point for debugging.

in per-entry sub-memory-context, where they were supposed to go, rather
than in CacheMemoryContext where the code was putting them. Must've
suffered a severe brain fade when I wrote this :-(
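The idiom involved, sketched with the standard memory-context API; the entry structure and field names here are hypothetical:

```c
#include "postgres.h"

#include "utils/memutils.h"

/* Hypothetical cache entry that owns a private memory context. */
typedef struct ExampleCacheEntry
{
    MemoryContext entry_cxt;    /* sub-context of CacheMemoryContext */
    void         *payload;
} ExampleCacheEntry;

/*
 * Allocate the entry's auxiliary data inside its own sub-context, so the
 * data goes away when the entry is dropped, instead of accumulating in
 * the long-lived CacheMemoryContext.
 */
static void
fill_entry(ExampleCacheEntry *entry, Size payload_size)
{
    MemoryContext oldcxt;

    oldcxt = MemoryContextSwitchTo(entry->entry_cxt);
    entry->payload = palloc(payload_size);
    MemoryContextSwitchTo(oldcxt);
}
```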

no longer the case). Add AND and TRAILING to ColLabel. All key words
except AS are now at least ColLabels.

sequences. This is done by disabling multi-byte awareness when it's
not necessary. This is kind of a workaround, not a perfect solution.
However, there is no ideal way to parse broken multi-byte character
sequences. So I guess this is the best we could do right
now...

so that transactional control could guarantee the consistency.

they don't themselves flush any cache entries, only add to to-do lists
that will be processed later.

and revert documentation to describe the existing INHERITS clause
instead, per recent discussion in pghackers. Also fix implementation
of SQL_inheritance SET variable: it is not cool to look at this var
during the initial parsing phase, only during parse_analyze(). See
recent bug report concerning misinterpretation of date constants just
after a SET TIMEZONE command. gram.y really has to be an invariant
transformation of the query string to a raw parsetree; anything that
can vary with time must be done during parse analysis.

table, per pghackers discussion around 22-Dec-00.

used before ...

from sergiop@sinectis.com.ar.

of early December 2000. COPY BINARY is now TOAST-safe.

to it. Bad dog.

Previous result did not have correct month boundaries so anything near edge
cases was suspect (e.g. April was in Q1 and July, August were lumped into
Q2).
Thanks to Denis Osadchy <osadchy@turbo.nsk.su> for the report.
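For reference, the correct month-to-quarter mapping is simple integer arithmetic; a tiny self-contained check (not the actual patch):

```c
#include <stdio.h>

/*
 * First month of the quarter containing "month" (1..12):
 * 1-3 -> 1, 4-6 -> 4, 7-9 -> 7, 10-12 -> 10.
 * With this mapping April starts Q2 and July/August fall in Q3.
 */
static int
quarter_start_month(int month)
{
    return ((month - 1) / 3) * 3 + 1;
}

int
main(void)
{
    for (int month = 1; month <= 12; month++)
        printf("month %2d -> quarter starts at month %d\n",
               month, quarter_start_month(month));
    return 0;
}
```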

starting a new hashtable search no longer clobbers any other search
active anywhere in the system. Fix RelationCacheInvalidate() so that
it will not crash or go into an infinite loop if invoked recursively,
as for example by a second SI Reset message arriving while we are still
processing a prior one.
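The fix described is the caller-owned-scan-state pattern. In later releases the dynahash scan API looks like the sketch below; treat the exact names as the later form of the interface rather than necessarily this commit's:

```c
#include "postgres.h"

#include "utils/hsearch.h"

/*
 * Each scan carries its own status struct, so starting one scan cannot
 * clobber another scan that happens to be in progress elsewhere, and
 * recursive invocation is harmless.
 */
static void
walk_table(HTAB *hashtable)
{
    HASH_SEQ_STATUS status;
    void           *entry;

    hash_seq_init(&status, hashtable);
    while ((entry = hash_seq_search(&status)) != NULL)
    {
        /* process entry */
    }
}
```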

tuples for a relation. Needed to prevent Assert failure in CLUSTER.