| Commit message | Author | Age |
| |
On some platforms these functions (malloc and realloc, given a size of
zero) return NULL, rather than the more common
practice of returning a pointer to a zero-sized block of memory. Hack our
various wrapper functions to hide the difference by substituting a size
request of 1. This is probably not so important for the callers, who
should never touch the block anyway if they asked for size 0 --- but it's
important for the wrapper functions themselves, which mistakenly treated
the NULL result as an out-of-memory failure. This broke at least pg_dump
for the case of no user-defined aggregates, as per report from
Matthew Carrington.
Back-patch to 9.2 to fix the pg_dump issue. Given the lack of previous
complaints, it seems likely that there is no live bug in previous releases,
even though some of these functions were in place before that.
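For illustration, a minimal sketch of the wrapper-side fix, assuming the
conventional "malloc or die" shape of these wrappers (not the literal
committed code):

    #include <stdio.h>
    #include <stdlib.h>

    void *
    pg_malloc(size_t size)
    {
        void       *tmp;

        /* Avoid unportable behavior of malloc(0) */
        if (size == 0)
            size = 1;
        tmp = malloc(size);
        if (tmp == NULL)
        {
            fprintf(stderr, "out of memory\n");
            exit(EXIT_FAILURE);
        }
        return tmp;
    }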
|
| |
Instead of having each object type implement the catalog munging
independently, centralize knowledge about how to do it and expand the
existing table in objectaddress.c with enough data about each object
type to support this operation.
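For illustration only, the rough shape of such a table-driven scheme; the
struct and field names below are a sketch, not necessarily the exact
objectaddress.c definitions:

    typedef struct
    {
        Oid         class_oid;          /* system catalog for this object type */
        Oid         oid_index_oid;      /* OID index on that catalog */
        int         oid_catcache_id;    /* syscache to look up by OID, or -1 */
        AttrNumber  attnum_name;        /* column holding the object's name */
    } ObjectPropertyType;

    static const ObjectPropertyType ObjectProperty[] =
    {
        {CollationRelationId, CollationOidIndexId, COLLOID, Anum_pg_collation_collname},
        {ProcedureRelationId, ProcedureOidIndexId, PROCOID, Anum_pg_proc_proname}
    };

A generic routine can then open the right catalog and update the right
column for any supported object type, instead of duplicating that logic
per type.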
Author: KaiGai Kohei
Tweaks by me
Reviewed by Robert Haas
|
| |
We had a number of variants on the theme of "malloc or die", with the
majority named like "pg_malloc", but by no means all. Standardize on the
names pg_malloc, pg_malloc0, pg_realloc, pg_strdup. Get rid of pg_calloc
entirely in favor of using pg_malloc0.
This is an essentially cosmetic change, so no back-patch. (I did find
a couple of places where psql and pg_dump were using plain malloc or
strdup instead of the pg_ versions, but they don't look significant
enough to bother back-patching.)
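For instance, pg_malloc0 can be defined in terms of pg_malloc, which is
roughly why a separate calloc-style wrapper is unnecessary (a sketch, not
the literal committed code):

    #include <string.h>

    void *
    pg_malloc0(size_t size)
    {
        void       *tmp;

        tmp = pg_malloc(size);      /* already dies on out-of-memory */
        memset(tmp, 0, size);
        return tmp;
    }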
|
| |
Fujii Masao
|
| |
timeval.t_sec is of type time_t, which is not always compatible with long.
I'm not sure whether this was just a harmless warning or a real bug, but this
fixes it either way.
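The hazard, sketched with the standard struct timeval field tv_sec (an
illustrative example, not the patched code): time_t can be wider than
long, e.g. a 64-bit time_t with a 32-bit long on 64-bit Windows.

    #include <stdio.h>
    #include <sys/time.h>

    static void
    show_seconds(struct timeval tv)
    {
        time_t      secs = tv.tv_sec;   /* portable: same type as the field */

        /* "long secs = tv.tv_sec;" could silently truncate the value */
        printf("%ld seconds\n", (long) secs);   /* cast only when printing */
    }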
|
| |
checking pg_upgrade.
|
| |
Hopefully this makes the *BSD buildfarm animals happy.
|
| |
This is just refactoring, to make the functions accessible outside xlog.c.
A followup patch will make use of that, to allow fetching timeline history
files over streaming replication.
|
| |
This affects date_in() and a couple of other functions that use DecodeDate().
Hitoshi Harada
|
| |
don't accidentally remove it.
|
| |
The error messages they generate are not portable enough.
Also, since the only point of the alter_generic_1 expected file was to
cover platforms with no collation support, it's now useless, so remove
it.
|
| |
On reflection (especially after noticing how many buildfarm critters have
__builtin_types_compatible_p but not _Static_assert), it seems like we
ought to try a bit harder to make these macros do something everywhere.
The initial cut at it would have been no help to code that is compiled only
on platforms without _Static_assert, for instance; and in any case not all
our contributors do their initial coding on the latest gcc version.
Some googling about static assertions turns up quite a bit of prior art
for making it work in compilers that lack _Static_assert. The method
that seems closest to our needs involves defining a struct with a bit-field
that has negative width if the assertion condition fails. There seems no
reliable way to get the error message string to be output, but throwing a
compile error with a confusing message is better than missing the problem
altogether.
In the same spirit, if we don't have __builtin_types_compatible_p we can at
least insist that the variable have the same width as the type. This won't
catch errors such as "wrong pointer type", but it's far better than
nothing.
In addition to changing the macro definitions, adjust a
compile-time-constant Assert in contrib/hstore to use StaticAssertStmt,
so we can get some buildfarm coverage on whether that macro behaves sanely
or not. There are surely more places that could be converted, but this is
the first one I came across.
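A condensed sketch of the two fallbacks described above (the real
definitions in c.h carry more variants and commentary):

    /* If the compiler has C11 _Static_assert, just use it.  Otherwise,
     * a bit-field of negative width forces a compile error; the message
     * argument goes unused, but a confusing error beats silence. */
    #define StaticAssertStmt(condition, errmessage) \
        ((void) sizeof(struct { int static_assert_failure : (condition) ? 1 : -1; }))

    /* Without __builtin_types_compatible_p, at least insist that the
     * variable has the same width as the type. */
    #define AssertVariableIsOfType(varname, typename) \
        StaticAssertStmt(sizeof(varname) == sizeof(typename), \
                         #varname " does not have type " #typename)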
|
| |
Currently, the macros only work with fairly recent gcc versions, but there
is room to expand them to other compilers that have comparable features.
Heavily revised and autoconfiscated version of a patch by Andres Freund.
|
| |
There are apparently some incompatibilities, per buildfarm.
|
| |
The tar output module did some very ugly and ultimately incorrect hacking
on COPY commands to try to get them to work in the context of restoring a
deconstructed tar archive. In particular, it would fail altogether for
table names containing any upper-case characters, since it smashed the
command string to lower-case before modifying it (and, just to add insult
to injury, did that in a way that would fail in multibyte encodings).
I don't see any particular value in being flexible about the case of the
command keywords, since the string will just have been created by
dumpTableData, so let's get rid of the whole case-folding thing.
Also, it doesn't seem to meet the POLA (principle of least astonishment)
for the script to restore data only
in COPY mode, so add \i commands to make it have comparable behavior in
--inserts mode.
Noted while looking at the tar-output code in connection with Brian
Weaver's patch.
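The multibyte hazard mentioned above, in miniature (an illustrative
fragment, not the removed pg_dump code): in encodings such as SJIS, the
trailing byte of a multibyte character can fall in the ASCII-letter
range, so byte-wise case folding can corrupt the string.

    #include <ctype.h>

    static void
    naive_downcase(char *cmd)
    {
        char       *p;

        for (p = cmd; *p; p++)
            *p = (char) tolower((unsigned char) *p);    /* unsafe for
                                                         * multibyte data */
    }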
|
| |
Many distributors use this, so we might as well see the warnings too.
|
| |
Since Python 2.2 is no longer supported, we can now use Py_RETURN_TRUE
and Py_RETURN_FALSE instead of the old workaround.
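The shape of the change, on a hypothetical function (PLy_example and its
stand-in condition are illustrative, not actual plpython code):

    #include <Python.h>

    /* the manual equivalent, which the macros replace:
     *     Py_INCREF(Py_True);
     *     return Py_True;
     */
    static PyObject *
    PLy_example(PyObject *self, PyObject *args)
    {
        if (PyTuple_Size(args) > 0)     /* stand-in condition */
            Py_RETURN_TRUE;             /* does the INCREF and return */
        Py_RETURN_FALSE;
    }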
|
| |
oid is a numeric type, so transform it to the appropriate Python numeric
type, as is done for the other numeric types.
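Presumably along these lines (the function name is hypothetical; the
point is that an oid datum becomes a Python integer):

    static PyObject *
    PLyOid_FromDatum(Datum d)
    {
        return PyLong_FromUnsignedLong((unsigned long) DatumGetObjectId(d));
    }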
|
| |
Originally there was only one expected file, and it failed to consider
machines without non-default collation support. Per buildfarm.
Also, move the test to another parallel group; the one it was originally
put in is already full according to comments in the schedule file. Per
note from Tom Lane.
|
| |
This makes refactoring of parts of the ALTER command safe(r) because we
ensure no change in functionality.
Author: KaiGai Kohei
|
| |
Both programs got the "magic" string wrong, causing standard-conforming tar
implementations to believe the output was just legacy tar format without
any POSIX extensions. This doesn't actually matter that much, especially
since pg_dump failed to fill the POSIX fields anyway, but still there is
little point in emitting tar format if we can't be compliant with the
standard. In addition, pg_dump failed to write the EOF marker correctly
(there should be 2 blocks of zeroes not just one), pg_basebackup put the
numeric group ID in the wrong place, and both programs had a pretty
brain-dead idea of how to compute the checksum. Fix all that and improve
the comments a bit.
pg_restore is modified to accept either the correct POSIX-compliant "magic"
string or the previous value. This part of the change will need to be
back-patched to avoid an unnecessary compatibility break when a previous
version tries to read tar-format output from 9.3 pg_dump.
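For reference, the relevant bits of the tar standard: the POSIX ustar
magic field is "ustar\0" with version "00" (the old GNU variant is
"ustar  " with trailing spaces), and the checksum is the sum of all 512
header bytes with the 8-byte chksum field counted as ASCII spaces. A
sketch of that computation:

    static int
    tarChecksum(const char *header)
    {
        int         i,
                    sum = 0;

        /* Sum all 512 bytes, treating the checksum field itself
         * (bytes 148..155) as if it held eight ASCII spaces. */
        for (i = 0; i < 512; i++)
            if (i < 148 || i >= 156)
                sum += (unsigned char) header[i];
        return sum + 8 * 0x20;      /* the eight spaces */
    }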
Brian Weaver and Tom Lane
|
| |
This fixes another error in commit 9e8da0f75731aaa7605cf4656c21ea09e84d2eb1.
I neglected to make the mark/restore functionality save and restore the
current set of array key values, which led to strange behavior if an
IndexScan with ScalarArrayOpExpr quals was used as the inner side of a
mergejoin. Per bug #7570 from Melese Tesfaye.
|
| |
This worked fine for superusers, but not for ordinary users trying to
cancel their own processes. Tweak the order in which the checks are done,
so that we correctly return SIGNAL_BACKEND_ERROR (which current callers
know to ignore without erroring out); that way, an ordinary user can loop
through a result set without fearing that a process might exit in the
middle of said looping, causing the remaining processes to go
unsignalled.
Incidentally, the last in-core caller of IsBackendPid() is now gone.
However, the function is exported and must remain in place, because
there are plenty of callers in external modules.
Author: Josh Kupershmidt
Reviewed by Noah Misch
|
| |
This script is a bit slow, but still it only takes a fraction of the time
the bison run does, so the overhead doesn't seem intolerable. And we
definitely need some mechanical aid here, because people keep missing
the need to add new keywords to the appropriate keyword-list production.
While at it, I moved check_keywords.pl from src/tools into
src/backend/parser where it's actually used, and did some very minor
cleanup on the script.
|
| |
There were assorted places where unreserved keywords were not treated the
same as T_WORD (that is, a random unrecognized identifier). Fix them.
It might not always be possible to allow this, but it is in all these
places, so I don't see any downside.
Per gripe from Jim Wilson. Arguably this is a bug fix, but given the lack
of other complaints and the ease of working around it (just quote the
word), I won't risk back-patching.
|
| |
Once again, somebody who ought to know better forgot this. We really
need some automated cross-check on the keyword-list productions, I think.
Per report from Brian Weaver.
|
| |
This makes it easy to split the configuration into many files, deployed
in a directory.
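For example (the directory path here is arbitrary), in postgresql.conf:

    # read all *.conf files in this directory, in file-name order
    include_dir 'conf.d'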
Magnus Hagander, Greg Smith, Selena Deckelmann, reviewed by Noah Misch.
|
| |
Small oversight in commit 0f524ea0cf388a149f362e48a33c01662eeddc04 ...
per report from Grazvydas Valeika.
|
| |
Jan Urbański
|
| |
Produce a NOTICE when the label already exists, for consistency with other
CREATE IF NOT EXISTS commands. Also, fix the code so it produces something
more user-friendly than an index violation when the label already exists.
This, not incidentally, enables adding a regression test that the previous
patch omitted for fear of exposing an unpredictable OID in the results.
Also some wordsmithing on the documentation.
|
| |
If the label is already in the enum, the statement becomes a no-op.
This will reduce the pain that comes from our not allowing this
operation inside a transaction block.
Andrew Dunstan, reviewed by Tom Lane and Magnus Hagander.
|
| |
The previous scheme had bugs in some corner cases involving tables that had
been renamed since a view was made. This could result in dumped views that
failed to reload or reloaded incorrectly, as seen in bug #7553 from Lloyd
Albin, as well as in some pgsql-hackers discussion back in January. Also,
its behavior for printing EXPLAIN plans was sometimes confusing because of
willingness to use the same alias for multiple RTEs (it was Ashutosh
Bapat's complaint about that aspect that started the January thread).
To fix, ensure that each RTE in the query has a unique unqualified alias,
by modifying the alias if necessary (we add "_" and digits as needed to
create a non-conflicting name). Then we can just print its variables with
that alias, avoiding the confusing and bug-prone scheme of sometimes
schema-qualifying variable names. In EXPLAIN, it proves to be expedient to
take the further step of only assigning such aliases to RTEs that are
actually referenced in the query, since the planner has a habit of
generating extra RTEs with the same alias in situations such as
inheritance-tree expansion.
Although this fixes a bug of very long standing, I'm hesitant to back-patch
such a noticeable behavioral change. My experiments while creating a
regression test convinced me that actually incorrect output (as opposed to
confusing output) occurs only in very narrow cases, which is backed up by
the lack of previous complaints from the field. So we may be better off
living with it in released branches; and in any case it'd be smart to let
this ripen awhile in HEAD before we consider back-patching it.
|
| |
Similar changes were made to pg_hba.conf earlier; this commit makes
pg_ident.conf behave the same way.
This has two user-visible effects. First, if pg_ident.conf contains multiple
errors, the whole file is parsed at postmaster startup time and all the
errors are immediately reported. Before this patch, the file was parsed and
the errors were reported only when someone tried to connect using an
authentication method that uses the file, and the parsing stopped on the
first error. Second, if you SIGHUP to reload the config files, and the new
pg_ident.conf file contains an error, the error is logged but the old file
stays in effect.
Also, regular expressions in pg_ident.conf are now compiled only once, when
the file is loaded, rather than every time a user is authenticated. That
should speed up authentication if you have a lot of regexps in the file.
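In generic POSIX-regex terms, compile-once-use-many looks like this (an
illustrative sketch; the server uses its own regex facilities):

    #include <stdbool.h>
    #include <regex.h>

    /* "re" was compiled with regcomp() once, when pg_ident.conf was
     * loaded (reporting any bad pattern immediately) ... */
    static bool
    ident_regex_matches(const regex_t *re, const char *system_user)
    {
        /* ... so each authentication attempt only pays for regexec() */
        return regexec(re, system_user, 0, NULL, 0) == 0;
    }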
Amit Kapila
|
| |
load_hba and load_ident load stuff in a separate memory context nowadays,
not in the current memory context.
|
| |
These calls were removed in commit 4240e429d0c2d889d0cda23c618f94e12c13ade7
as part of a general refactoring and improvement of DDL locking. However,
there's a problem not solved by the rewrite, which is that GRANT/REVOKE
update pg_class.relacl without taking any particular lock on the target
table as such. If another backend fails to do AcceptInvalidationMessages,
it won't notice a recently-committed change in ACLs. Bug #7557 from Piotr
Czachur demonstrates that there's at least one code path in 9.2.0 in which
a command fails to do any AcceptInvalidationMessages calls at all, if the
current transaction already holds all the locks it will need.
Since we're hard up against the release deadline for 9.2.1, fix this by
putting back the AcceptInvalidationMessages calls in heap_openrv and
heap_openrv_extended, thereby restoring the historical behavior in this
area. We ought to look for a more elegant and perhaps more bulletproof
solution, but there's no time for that right now.
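Roughly the restored shape, much simplified (the real functions do
considerably more):

    Relation
    heap_openrv(const RangeVar *relation, LOCKMODE lockmode)
    {
        /* Ensure we see recently-committed catalog changes, such as
         * GRANT/REVOKE updates to pg_class.relacl, even if this
         * transaction already holds every lock it needs. */
        AcceptInvalidationMessages();

        return heap_open(RangeVarGetRelid(relation, lockmode, false),
                         lockmode);
    }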
|
| |
DST law changes in Fiji.
|
| |
In commit 9e8da0f75731aaa7605cf4656c21ea09e84d2eb1, I improved btree
to handle ScalarArrayOpExpr quals natively, so that constructs like
"indexedcol IN (list)" could be supported by index-only scans. Using
such a qual results in multiple scans of the index under the hood.
I went to some lengths to ensure that this still produces rows in index
order ... but I failed to recognize that if a higher-order index column
is lacking an equality constraint, rescans can produce out-of-order
data from that column. Tweak the planner to not expect sorted output
in that case. Per trouble report from Robert McGehee.
|
| |
Not sure how we missed this case, but we did. Per bug #7551 from
Diego de Lima.
|
| |
It prevented the libpq directory from being installable by itself.
|
| |
Currently, we are making mangled copies of plpython/{expected,sql} to
plpython/python3/{expected,sql}, and run the tests in
plpython/python3. This has the disadvantage that the regression.diffs
file, if any, ends up in plpython/python3, which is not the normal
location. If we instead make the mangled copies in
plpython/{expected,sql}/python3/, we can run the tests from the normal
directory, regression.diffs ends up the normal place, and the
pg_regress invocation also becomes a lot simpler. It's also more
obvious at run time what's going on, because the tests end up being
named "python3/something" in the test output.
|
| |
Some experimentation with examples similar to bug #7539 has convinced me
that indxpath.c's original implementation of parameterized-path generation
was several bricks shy of a load. In general, if we are relying on a
particular outer rel or set of outer rels for a parameterized path, the
path should use every indexable join clause that's available from that rel
or rels. Any join clauses that get left out of the indexqual will end up
getting applied as plain filter quals (qpquals), and that's generally a
significant loser compared to having the index AM enforce them. (This is
particularly true with btree, which can skip the index scan entirely if
it can see that the given indexquals are mutually contradictory.) The
original heuristics failed to ensure this, though, and were overly
complicated anyway. Rewrite to make the code explicitly identify each
useful set of outer rels and then select all applicable join clauses for
each one. The one plan that changes in the regression tests is in fact
for the better according to the planner's cost estimates.
(Note: this is not a correctness issue but just a matter of plan quality.
I don't yet know what is going on in bug #7539, but I don't expect this
change to fix that.)
|
| |
Recovery code documents clearly that a shutdown checkpoint is executed at
the end of recovery, and a shutdown checkpoint WAL record is written; but
the buffer manager had been altered to treat end of recovery as a normal
checkpoint.
This bug exacerbates the bufmgr relpersistence bug.
Bug spotted by Andres Freund, patch by me.
|
| |
Looks like the correct size of DOS-ified tenk.data is 680800 not 680801.
(I got the latter from a version of unix2dos that appends a trailing ^Z,
which evidently is not git's practice.)
|
| |
- ALTER DOMAIN ... DROP/RENAME/VALIDATE CONSTRAINT
- ALTER TABLE ... RENAME/VALIDATE CONSTRAINT
- COMMENT ON CONSTRAINT
- SET CONSTRAINTS
|