| Commit message | Author | Age |
| |
We no longer use GetNewOidWithIndex on pg_largeobject; rather,
pg_largeobject_metadata's regular OID column is considered the repository
of OIDs for large objects. The special functionality is still needed for
TOAST tables, however.
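
For illustration, the special functionality retained for TOAST tables is
essentially the classic allocate-and-probe loop. A hedged sketch of the idea
follows; GetNewObjectId() is the real OID counter, but oid_exists_in_index()
is an invented stand-in for the index probe, and this is not the actual
catalog.c code.

    /* Sketch: draw OIDs from the counter until one is not already
     * present in the table, checking via the table's OID index. */
    Oid
    get_new_oid_sketch(Relation rel, Oid indexId)
    {
        Oid     newOid;

        for (;;)
        {
            newOid = GetNewObjectId();              /* global OID counter */
            if (!oid_exists_in_index(rel, indexId, newOid))   /* invented */
                return newOid;
        }
    }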
|
| |
Building a shlib on AIX requires use of the mkldexport.sh script, but we
failed to install that, preventing its use from non-source-tree contexts.
Also, Makefile.aix had the wrong idea about where to find the installed
copy of the postgres.imp symbol file used by AIX.
Per report from John Pierce. Patch all the way back, since this has been
broken since the beginning of PGXS.
|
| |
Do read/write permissions checks at most once per large object descriptor,
not once per lo_read or lo_write call as before. The repeated tests were
quite useless in the read case since the snapshot-based tests were
guaranteed to produce the same answer every time. In the write case,
the extra tests could in principle detect revocation of write privileges
after a series of writes has started --- but there's a race condition there
anyway, since we'd check privileges before performing and certainly before
committing the write. So there's no real advantage to checking every
single time, and we might as well redefine it as "only check the first
time".
On the same reasoning, remove the LargeObjectExists checks in inv_write
and inv_truncate. We already checked existence when the descriptor was
opened, and checking again doesn't provide any real increment of safety
that would justify the cost.
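
A minimal sketch of the check-once pattern described here; the struct, flag,
and helper names are invented for illustration and are not the actual
inv_api.c definitions.

    /* Remember that privileges were verified when the descriptor was
     * opened; later writes through it skip the test. */
    typedef struct LODescSketch
    {
        Oid     lo_oid;
        int     flags;
    } LODescSketch;

    #define LO_CHECKED_WRITE   (1 << 0)

    static void
    lo_write_sketch(LODescSketch *desc)
    {
        if ((desc->flags & LO_CHECKED_WRITE) == 0)
        {
            check_write_permission(desc->lo_oid);   /* invented; errors out */
            desc->flags |= LO_CHECKED_WRITE;        /* only check the first time */
        }
        /* ... perform the write ... */
    }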
|
| |
I extracted the refactoring patch from a larger patch that contained other
changes too, but missed one unintentional change and didn't test enough...
|
| |
This is just refactoring with no user-visible effect, to make the code more
readable.
|
| |
It was apparently never necessary.
|
| |
Fix broken-on-bigendian-machines byte-swapping functions, add missed update
of alternate regression expected file, improve error reporting, remove some
unnecessary code, sync testlo64.c with current testlo.c (it seems to have
been cloned from a very old copy of that), assorted cosmetic improvements.
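
One portable way to write such a 64-bit byte-swapping helper, shown as a
generic sketch (not necessarily the approach the commit took), is to store
the value byte by byte in network (big-endian) order:

    #include <stdint.h>

    /* Store a 64-bit value into buf in big-endian order, independent
     * of host endianness. */
    static void
    put_uint64_be(unsigned char *buf, uint64_t v)
    {
        int     i;

        for (i = 7; i >= 0; i--)
        {
            buf[i] = (unsigned char) (v & 0xFF);
            v >>= 8;
        }
    }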
|
| |
Since postgres.h includes palloc.h, definitions that affect the latter
must be present before the former is included.
Per buildfarm results.
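
The point can be illustrated with a hypothetical macro (the name here is
invented) whose definition must be visible to palloc.h:

    /* Wrong: postgres.h has already pulled in palloc.h, so the macro
     * comes too late to affect it. */
    #include "postgres.h"
    #define SOME_PALLOC_OPTION

    /* Right: define it before postgres.h is included. */
    #define SOME_PALLOC_OPTION
    #include "postgres.h"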
|
| |
We already had those, but they forced modules to spell out the function
bodies twice. Eliminate some duplicates we had already grown.
Extracted from a somewhat larger patch from Andres Freund.
|
| |
Tomonaru Katsumata
|
| |
This bug was introduced by my patch to use the regular die/quickdie signal
handlers in walsender processes. I tried to make walsender exit at next
CHECK_FOR_INTERRUPTS() by setting ProcDiePending, but that's not enough;
you need to set InterruptPending too. On second thought, it was not a very
good way to make walsender exit anyway, so use proc_exit(0) instead.
Also, send a CommandComplete message before exiting; that's what we did
before, and you get a nicer error message in the standby that way.
Reported by Thom Brown.
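
The reason setting ProcDiePending alone was not enough is visible in the
shape of the interrupt-check macro (simplified from miscadmin.h):
ProcessInterrupts() is what acts on ProcDiePending, but it only runs when
InterruptPending is set.

    #define CHECK_FOR_INTERRUPTS() \
        do { \
            if (InterruptPending) \
                ProcessInterrupts(); \
        } while (0)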
|
| |
Now users can access up to 4TB large objects (standard 8KB BLCKSZ case).
For this purpose new libpq API functions lo_lseek64, lo_tell64 and
lo_truncate64 are added, along with corresponding new backend functions
of the same names. inv_api.c is changed to handle 64-bit offsets.
Patch contributed by Nozomi Anzai (backend side) and Yugo Nagata
(frontend side, docs, regression tests and example program). Reviewed
by Kohei Kaigai. Committed by Tatsuo Ishii with minor edits.
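
A minimal client-side example of the new 64-bit API; error handling is
omitted, the connection string is left empty for defaults, and large object
operations must run inside a transaction.

    #include <stdio.h>
    #include <libpq-fe.h>
    #include <libpq/libpq-fs.h>     /* INV_READ, INV_WRITE */

    int
    main(void)
    {
        PGconn     *conn = PQconnectdb("");
        pg_int64    offset = (pg_int64) 3 * 1024 * 1024 * 1024 * 1024; /* 3TB */
        Oid         loid;
        int         fd;

        PQclear(PQexec(conn, "BEGIN"));
        loid = lo_creat(conn, INV_READ | INV_WRITE);
        fd = lo_open(conn, loid, INV_READ | INV_WRITE);

        lo_lseek64(conn, fd, offset, SEEK_SET);     /* past the old 2GB limit */
        lo_write(conn, fd, "x", 1);
        printf("position: %lld\n", (long long) lo_tell64(conn, fd));
        lo_truncate64(conn, fd, offset);

        lo_close(conn, fd);
        PQclear(PQexec(conn, "COMMIT"));
        PQfinish(conn);
        return 0;
    }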
|
| |
The regular backend's main loop handles signal handling and error recovery
better than the current WAL sender command loop does. For example, if the
client hangs and a SIGTERM is received before starting streaming, the
walsender will now terminate immediately, rather than hang until the
connection times out.
|
| |
Per testing of previous patch.
|
| |
Our getnameinfo() replacement implementation in getaddrinfo.c failed
unless NI_NUMERICHOST and NI_NUMERICSERV were given as flags, because
it doesn't resolve host names, only numeric IPs. But per standard,
when those flags are not given, an implementation can still degrade to
not returning host names, so this restriction is unnecessary. When we
remove it, we can eliminate some code in postmaster.c that apparently
tried to work around that.
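
For example, a caller may now omit the numeric flags; a full implementation
resolves names, while a degraded one such as ours may still return the
numeric forms (a small sketch, assuming a filled-in sockaddr):

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netdb.h>

    static void
    show_peer(const struct sockaddr *addr, socklen_t addrlen)
    {
        char    host[NI_MAXHOST];
        char    serv[NI_MAXSERV];

        /* No NI_NUMERICHOST/NI_NUMERICSERV: the result may be a real
         * host name or, from a degraded implementation, a numeric IP. */
        if (getnameinfo(addr, addrlen,
                        host, sizeof(host), serv, sizeof(serv), 0) == 0)
            printf("peer is %s:%s\n", host, serv);
    }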
|
| |
The initial transition value is stored as a text string and not fed to the
transition type's input function until runtime (so that values such as
"now" don't get frozen at creation time). Previously, CREATE AGGREGATE
didn't do anything with it but that, which meant that even erroneous values
would be accepted and not complained of until the aggregate is used. This
seems unhelpful, and it's confused at least one user, as in Rhys Stewart's
recent report. It seems worth taking a few more cycles to invoke the input
function and verify that the value is acceptable. We can't do this if the
transition type is polymorphic, but in normal aggregates we know the actual
transition type so we can call the right input function.
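
The added validation amounts to a few lines at CREATE AGGREGATE time. This
is a sketch of the idea using real backend helpers (getTypeInputInfo,
OidInputFunctionCall), not the exact pg_aggregate.c code:

    /* Run the initial value through the transition type's input
     * function, purely so an unacceptable value errors out now. */
    if (initval != NULL && !IsPolymorphicType(transtype))
    {
        Oid     typinput;
        Oid     typioparam;

        getTypeInputInfo(transtype, &typinput, &typioparam);
        (void) OidInputFunctionCall(typinput, initval, typioparam, -1);
    }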
|
| |
The previous coding of the YYLLOC_DEFAULT macro behaved strangely for empty
productions, assigning the previous nonterminal's location as the parse
location of the result. The usefulness of that was (at best) debatable
already, but the real problem is that in list-generating nonterminals like
    OptFooList: /* EMPTY */ { ... } | OptFooList Foo { ... } ;
the initially-identified location would get copied up, so that even a
nonempty list would be given a bogus parse location. Document how to work
around that, and do so for OptSchemaEltList, so that the error condition
just added for CREATE SCHEMA IF NOT EXISTS produces a sane error cursor.
So far as I can tell, there are currently no other cases where the
situation arises, so we don't need other instances of this coding yet.
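
Roughly, the macro now gives empty productions a dummy location, and a list
production patches that up when it adds its first real element. The
following is a sketch in the style of gram.y, under the assumption that
locations are plain ints; it is not necessarily the exact committed code.

    /* Empty productions get location -1 rather than inheriting the
     * previous nonterminal's location. */
    #define YYLLOC_DEFAULT(Current, Rhs, N) \
        do { \
            if ((N) > 0) \
                (Current) = (Rhs)[1]; \
            else \
                (Current) = (-1); \
        } while (0)

    /* Workaround in a list production: adopt the first element's
     * location if the empty base rule left us with the dummy. */
    OptSchemaEltList:
            OptSchemaEltList schema_stmt
                {
                    if (@$ < 0)
                        @$ = @2;
                    $$ = lappend($1, $2);
                }
            | /* EMPTY */   { $$ = NIL; }
        ;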
|
| |
Per discussion, schema-element subcommands are not allowed together with
this option, since it's not very obvious what should happen to the element
objects.
Fabrízio de Royes Mello
|
| |
Remove duplicate implementation of catalog munging and miscellaneous
privilege and consistency checks. Instead rely on already existing data
in objectaddress.c to do the work.
Author: KaiGai Kohei
Tweaked by me
Reviewed by Robert Haas
|
| |
examine_simple_variable supposed that any RTE_SUBQUERY rel it gets pointed
at must have been planned already. However, this isn't a safe assumption
because we must do selectivity estimation while generating indexscan paths,
and that code might look at join clauses involving a rel that the loop in
set_base_rel_sizes() hasn't reached yet. The simplest fix is to play dumb
in such a situation, that is give up trying to extract any stats for the
Var. This could possibly be improved by making a separate pass over the
RTE list to plan each unflattened subquery before we start the main
planning work --- but that would be pretty invasive and it doesn't seem
worth it, for now at least. (We couldn't just break set_base_rel_sizes()
into two loops: the prescan would need to handle all subquery rels in the
query, not only those in the current join subproblem.)
This bug was introduced in commit 1cb108efb0e60d87e4adec38e7636b6e8efbeb57,
although I think that subsequent changes may have exposed it more than it
was originally. Per bug #7580 from Maxim Boguk.
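
Schematically, the "play dumb" fix is just an early exit; whether the actual
patch tests exactly this field is an assumption here, but RelOptInfo's
subroot is only set once the subquery has been planned.

    /* If the subquery behind this RTE has no plan yet, give up and
     * leave the Var's statistics unknown. */
    if (rel->subroot == NULL)
        return;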
|
| |
Apparently this was considered in the original code (see commit
cec3b0a9) but I failed to notice that such entries would always be
skipped by the database check at the start of the loop.
Per bugs #7578 by Nikolay, #6116 by tushar.qa@gmail.com.
|
| |
You can now get the number of rows processed by a COPY statement in a
PL/pgSQL function with "GET DIAGNOSTICS x = ROW_COUNT".
Pavel Stehule, reviewed by Amit Kapila, with some editing by me.
|
| |
The comment explaining the naming of timeline history files was wrong, and
the history file was not being archived.
Pointed out by Fujii Masao.
|
| |
On some platforms these functions return NULL, rather than the more common
practice of returning a pointer to a zero-sized block of memory. Hack our
various wrapper functions to hide the difference by substituting a size
request of 1. This is probably not so important for the callers, who
should never touch the block anyway if they asked for size 0 --- but it's
important for the wrapper functions themselves, which mistakenly treated
the NULL result as an out-of-memory failure. This broke at least pg_dump
for the case of no user-defined aggregates, as per report from
Matthew Carrington.
Back-patch to 9.2 to fix the pg_dump issue. Given the lack of previous
complaints, it seems likely that there is no live bug in previous releases,
even though some of these functions were in place before that.
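
The wrapper-level hack is a one-liner per function; a condensed sketch
matching the description above (not necessarily the exact committed code):

    #include <stdio.h>
    #include <stdlib.h>

    void *
    pg_malloc_sketch(size_t size)
    {
        void   *p;

        if (size == 0)          /* hide platform-dependent malloc(0) */
            size = 1;
        p = malloc(size);
        if (p == NULL)          /* now NULL really means out of memory */
        {
            fprintf(stderr, "out of memory\n");
            exit(EXIT_FAILURE);
        }
        return p;
    }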
|
| |
Instead of having each object type implement the catalog munging
independently, centralize knowledge about how to do it and expand the
existing table in objectaddress.c with enough data about each object
type to support this operation.
Author: KaiGai Kohei
Tweaks by me
Reviewed by Robert Haas
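
Conceptually, the table maps each object type to the catalog details a
generic implementation needs. The struct below is an invented illustration
of that shape; the real ObjectProperty entries in objectaddress.c differ in
fields and naming.

    typedef struct
    {
        Oid         class_oid;          /* the system catalog's OID */
        int         name_catcache_id;   /* syscache keyed by name, or -1 */
        AttrNumber  attnum_name;        /* column holding the object's name */
        AttrNumber  attnum_owner;       /* column holding the owner, or 0 */
    } ObjectPropertySketch;

    /* A generic rename then looks up the entry for the object type,
     * fetches the catalog row, performs the shared permission and
     * duplicate-name checks, and updates the attnum_name column. */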
|
| |
Hopefully this makes the *BSD buildfarm animals happy.
|
| |
This is just refactoring, to make the functions accessible outside xlog.c.
A followup patch will make use of that, to allow fetching timeline history
files over streaming replication.
|
| |
This affects date_in() and a couple of other functions that use DecodeDate().
Hitoshi Harada
|
| |
don't accidentally remove it.
|
| |
Both programs got the "magic" string wrong, causing standard-conforming tar
implementations to believe the output was just legacy tar format without
any POSIX extensions. This doesn't actually matter that much, especially
since pg_dump failed to fill the POSIX fields anyway, but still there is
little point in emitting tar format if we can't be compliant with the
standard. In addition, pg_dump failed to write the EOF marker correctly
(there should be 2 blocks of zeroes not just one), pg_basebackup put the
numeric group ID in the wrong place, and both programs had a pretty
brain-dead idea of how to compute the checksum. Fix all that and improve
the comments a bit.
pg_restore is modified to accept either the correct POSIX-compliant "magic"
string or the previous value. This part of the change will need to be
back-patched to avoid an unnecessary compatibility break when a previous
version tries to read tar-format output from 9.3 pg_dump.
Brian Weaver and Tom Lane
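
For reference, the ustar checksum really is that simple: an unsigned sum of
all 512 header bytes, with the 8-byte checksum field (at offset 148) counted
as ASCII spaces; the standard magic is "ustar" plus a NUL at offset 257,
followed by version "00". A self-contained sketch:

    static unsigned int
    tar_checksum(const unsigned char *header)
    {
        unsigned int    sum = 0;
        int             i;

        for (i = 0; i < 512; i++)
            sum += (i >= 148 && i < 156) ? ' ' : header[i];
        return sum;
    }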
|
| |
This fixes another error in commit 9e8da0f75731aaa7605cf4656c21ea09e84d2eb1.
I neglected to make the mark/restore functionality save and restore the
current set of array key values, which led to strange behavior if an
IndexScan with ScalarArrayOpExpr quals was used as the inner side of a
mergejoin. Per bug #7570 from Melese Tesfaye.
|
| |
This worked fine for superusers, but not for ordinary users trying to
cancel their own processes. Tweak the order the checks are done in so
that we correctly return SIGNAL_BACKEND_ERROR (which current callers
know to ignore without erroring out) so that an ordinary user can loop
through a resultset without fearing that a process might exit in the
middle of said looping -- causing the remaining processes to go
unsignalled.
Incidentally, the last in-core caller of IsBackendPid() is now gone.
However, the function is exported and must remain in place, because
there are plenty of callers in external modules.
Author: Josh Kupershmidt
Reviewed by Noah Misch
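
Schematically, the fix is to do the existence lookup before the privilege
check, so that a vanished PID yields the ignorable result code. This is a
rough sketch of the ordering, not the exact function body:

    proc = BackendPidGetProc(pid);
    if (proc == NULL)
        return SIGNAL_BACKEND_ERROR;        /* callers know to ignore this */

    if (!superuser() && proc->roleId != GetUserId())
        return SIGNAL_BACKEND_NOPERMISSION; /* genuine permission failure */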
|
| |
This script is a bit slow, but still it only takes a fraction of the time
the bison run does, so the overhead doesn't seem intolerable. And we
definitely need some mechanical aid here, because people keep missing
the need to add new keywords to the appropriate keyword-list production.
While at it, I moved check_keywords.pl from src/tools into
src/backend/parser where it's actually used, and did some very minor
cleanup on the script.
|
| |
Once again, somebody who ought to know better forgot this. We really
need some automated cross-check on the keyword-list productions, I think.
Per report from Brian Weaver.
|
| |
This allows easily splitting configuration into many files, deployed in a
directory.
Magnus Hagander, Greg Smith, Selena Deckelmann, reviewed by Noah Misch.
|
| |
Produce a NOTICE when the label already exists, for consistency with other
CREATE IF NOT EXISTS commands. Also, fix the code so it produces something
more user-friendly than an index violation when the label already exists.
This, not incidentally, enables adding a regression test that the previous
patch omitted for fear of exposing an unpredictable OID in the results.
Also some wordsmithing on the documentation.
|
| |
If the label is already in the enum the statement becomes a no-op.
This will reduce the pain that comes from our not allowing this
operation inside a transaction block.
Andrew Dunstan, reviewed by Tom Lane and Magnus Hagander.
|
| |
The previous scheme had bugs in some corner cases involving tables that had
been renamed since a view was made. This could result in dumped views that
failed to reload or reloaded incorrectly, as seen in bug #7553 from Lloyd
Albin, as well as in some pgsql-hackers discussion back in January. Also,
its behavior for printing EXPLAIN plans was sometimes confusing because of
willingness to use the same alias for multiple RTEs (it was Ashutosh
Bapat's complaint about that aspect that started the January thread).
To fix, ensure that each RTE in the query has a unique unqualified alias,
by modifying the alias if necessary (we add "_" and digits as needed to
create a non-conflicting name). Then we can just print its variables with
that alias, avoiding the confusing and bug-prone scheme of sometimes
schema-qualifying variable names. In EXPLAIN, it proves to be expedient to
take the further step of only assigning such aliases to RTEs that are
actually referenced in the query, since the planner has a habit of
generating extra RTEs with the same alias in situations such as
inheritance-tree expansion.
Although this fixes a bug of very long standing, I'm hesitant to back-patch
such a noticeable behavioral change. My experiments while creating a
regression test convinced me that actually incorrect output (as opposed to
confusing output) occurs only in very narrow cases, which is backed up by
the lack of previous complaints from the field. So we may be better off
living with it in released branches; and in any case it'd be smart to let
this ripen awhile in HEAD before we consider back-patching it.
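
The uniquification itself is simple: keep appending a suffix until the name
no longer collides. A self-contained illustration with invented helpers (not
the ruleutils.c code):

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    static bool
    alias_in_use(const char *name, const char **used, int nused)
    {
        int     i;

        for (i = 0; i < nused; i++)
            if (strcmp(name, used[i]) == 0)
                return true;
        return false;
    }

    /* Build a variant of "name" colliding with nothing in used[],
     * appending "_2", "_3", ... as needed. */
    static void
    make_unique_alias(char *buf, size_t bufsz, const char *name,
                      const char **used, int nused)
    {
        int     n = 1;

        snprintf(buf, bufsz, "%s", name);
        while (alias_in_use(buf, used, nused))
            snprintf(buf, bufsz, "%s_%d", name, ++n);
    }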
|
| |
Similar changes were done to pg_hba.conf earlier already; this commit makes
pg_ident.conf behave the same as pg_hba.conf.

This has two user-visible effects. First, if pg_ident.conf contains multiple
errors, the whole file is parsed at postmaster startup time and all the
errors are immediately reported. Before this patch, the file was parsed and
the errors were reported only when someone tried to connect using an
authentication method that uses the file, and the parsing stopped on the
first error. Second, if you SIGHUP to reload the config files, and the new
pg_ident.conf file contains an error, the error is logged but the old file
stays in effect.

Also, regular expressions in pg_ident.conf are now compiled only once, when
the file is loaded, rather than every time a user is authenticated. That
should speed up authentication if you have a lot of regexps in the file.
Amit Kapila
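
The compile-once pattern, shown with POSIX <regex.h> purely for illustration
(the backend uses its own regex engine and wide-character strings):

    #include <regex.h>
    #include <stdbool.h>

    static regex_t  map_re;     /* compiled once, when the file is loaded */

    static bool
    load_map_line(const char *pattern)
    {
        /* Compiling at load time also means a bad regexp is reported
         * immediately, not at the first authentication attempt. */
        return regcomp(&map_re, pattern, REG_EXTENDED) == 0;
    }

    static bool
    ident_matches(const char *system_user)
    {
        regmatch_t  match[2];

        /* Authentication now only executes the precompiled regexp. */
        return regexec(&map_re, system_user, 2, match, 0) == 0;
    }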
|
| |
load_hba and load_ident load stuff in a separate memory context nowadays,
not in the current memory context.
|
| |
These calls were removed in commit 4240e429d0c2d889d0cda23c618f94e12c13ade7
as part of a general refactoring and improvement of DDL locking. However,
there's a problem not solved by the rewrite, which is that GRANT/REVOKE
update pg_class.relacl without taking any particular lock on the target
table as such. If another backend fails to do AcceptInvalidationMessages,
it won't notice a recently-committed change in ACLs. Bug #7557 from Piotr
Czachur demonstrates that there's at least one code path in 9.2.0 in which
a command fails to do any AcceptInvalidationMessages calls at all, if the
current transaction already holds all the locks it will need.
Since we're hard up against the release deadline for 9.2.1, fix this by
putting back the AcceptInvalidationMessages calls in heap_openrv and
heap_openrv_extended, thereby restoring the historical behavior in this
area. We ought to look for a more elegant and perhaps more bulletproof
solution, but there's no time for that right now.
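
Schematically, the restored behavior is an unconditional flush of pending
invalidation traffic at the top of the open-by-name path; a paraphrase, not
the exact committed code:

    Relation
    heap_openrv_sketch(const RangeVar *relation, LOCKMODE lockmode)
    {
        /* Absorb queued sinval messages so a recently-committed catalog
         * change (e.g. a GRANT/REVOKE updating pg_class.relacl) is seen
         * even if we already hold every lock we need. */
        AcceptInvalidationMessages();

        return relation_openrv(relation, lockmode);
    }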
|
| |
In commit 9e8da0f75731aaa7605cf4656c21ea09e84d2eb1, I improved btree
to handle ScalarArrayOpExpr quals natively, so that constructs like
"indexedcol IN (list)" could be supported by index-only scans. Using
such a qual results in multiple scans of the index under the hood.
I went to some lengths to ensure that this still produces rows in index
order ... but I failed to recognize that if a higher-order index column
is lacking an equality constraint, rescans can produce out-of-order
data from that column. Tweak the planner to not expect sorted output
in that case. Per trouble report from Robert McGehee.
|
| |
Not sure how we missed this case, but we did. Per bug #7551 from
Diego de Lima.
|