Not all compilers understand that elog(ERROR, ...) never returns.
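
The standard workaround for such compilers is to put a dead "return"
(or an abort()) after the call, so that every code path visibly returns
a value. A minimal standalone sketch, with fatal_error() standing in
for elog(ERROR, ...) (hypothetical code, not from the tree):

    #include <stdio.h>
    #include <stdlib.h>

    /* Stand-in for elog(ERROR, ...): reports the problem, never returns. */
    static void
    fatal_error(const char *msg)
    {
        fprintf(stderr, "ERROR: %s\n", msg);
        exit(1);
    }

    static int
    parse_sign(char c)
    {
        switch (c)
        {
            case '+':
                return 1;
            case '-':
                return -1;
            default:
                fatal_error("unexpected sign character");
                return 0;   /* unreachable, but silences the warning */
        }
    }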

Some cases were still reporting errors and aborting, instead of a
NOTICE that the object was being skipped. This makes it more difficult
to cleanly handle pg_dump --clean, so change that to instead skip
missing objects properly.

Per bug #7873 reported by Dave Rolsky; apparently this affects a large
number of users.

Authors: Pavel Stehule and Dean Rasheed. Some tweaks by Álvaro Herrera

gcc 4.8 was happy with having a duplicate typedef, but most compilers
seem not to be, per buildfarm.

GIN posting lists are now encoded using varbyte encoding, which allows
them to fit in much smaller space than the straight ItemPointer array
format used before. The new encoding is used both for the lists stored
in-line in entry tree items and for posting tree leaf pages.

To maintain backwards compatibility and keep pg_upgrade working, the
code can still read old-style pages and tuples. Posting tree leaf
pages in the new format are flagged with the GIN_COMPRESSED flag, to
distinguish old- and new-format pages. Likewise, entry tree tuples in
the new format have a GIN_ITUP_COMPRESSED flag set in a bit that was
previously unused.

This patch bumps GIN_CURRENT_VERSION from 1 to 2. New indexes created
with version 9.4 will therefore have version number 2 in the metapage,
while old pg_upgraded indexes will have version 1. The code treats
them the same, but it might come in handy in the future, if we want to
drop support for the uncompressed format.

Alexander Korotkov and me. Reviewed by Tomas Vondra and Amit Langote.
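
For a sense of why varbyte encoding wins, here is a minimal sketch of
the general technique (delta-compress an ascending sequence, then emit
each delta in base-128 with a continuation bit). This illustrates the
idea only, not the actual GIN on-disk layout:

    #include <stddef.h>
    #include <stdint.h>

    /*
     * Encode the deltas between ascending items, 7 payload bits per
     * byte, high bit set on continuation bytes.  Small deltas -- the
     * common case in a posting list -- take a single byte instead of
     * a full item pointer.
     */
    static size_t
    encode_deltas(const uint64_t *items, int nitems, unsigned char *out)
    {
        uint64_t    prev = 0;
        size_t      len = 0;

        for (int i = 0; i < nitems; i++)
        {
            uint64_t    delta = items[i] - prev;   /* input must ascend */

            prev = items[i];
            while (delta >= 0x80)
            {
                out[len++] = (unsigned char) ((delta & 0x7F) | 0x80);
                delta >>= 7;
            }
            out[len++] = (unsigned char) delta;
        }
        return len;
    }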

This will help in preparing clean patches for the upcoming json work.

A while back, 2c92edad48796119c83d7dbe6c33425d1924626d allowed
type_func_name_keywords to be used in more places, including role
identifiers. Unfortunately, that commit missed the cases where
name_list was used for lists of roles, e.g. in DROP ROLE. This
resulted in the unfortunate situation that you could CREATE a role
with a type_func_name_keywords-allowed identifier, but not DROP it
directly (ALTER could be used to rename it to something which could
be DROP'd).

This extends the use of type_func_name_keywords to the places where
role lists can be used.

Back-patch to 9.0, as 2c92edad48796119c83d7dbe6c33425d1924626d was.

All these constructs generate parse trees consisting of a Const and
a run-time type coercion (perhaps a FuncExpr or a CoerceViaIO). Modify
the raw parse output so that we end up with the original token's
location attached to the type coercion node while the Const has
location -1; before, it was the other way around. This makes no
difference in terms of what exprLocation() will say about the parse
tree as a whole, so it should not have any user-visible impact. The
point of changing it is that we do not want contrib/pg_stat_statements
to treat these constructs as replaceable constants. It will do the
right thing if the Const has location -1 rather than a valid location.

This is a pretty ugly hack, but then this code is ugly already; we
should someday replace this translation with special-purpose parse
node(s) that would allow ruleutils.c to reconstruct the original query
text. (See also commit 5d3fcc4c2e137417ef470d604fee5e452b22f6a7, which
also hacked location assignment rules for the benefit of
pg_stat_statements.)

Back-patch to 9.2 where pg_stat_statements grew the ability to
recognize replaceable constants.

Kyotaro Horiguchi

Unlike our other array functions, this considers the total number of
elements across all dimensions, and returns 0 rather than NULL when
the array has no elements. But it seems that both of those behaviors
are almost universally desired, so hopefully that's OK.

Marko Tiikkaja, reviewed by Dean Rasheed and Pavel Stehule

Commit a5bca4ef034f71175d46462963af2329d22068c2 accidentally changed
the conditions under which the "skipping missing configuration file"
message is emitted, because it forced OK to true instead of leaving
the value untouched.

Spotted by Tom Lane.

There's no apparent way to trigger this, so I'm not going to worry
about back-patching it for now. But it's still wrong.

Marti Raudsepp

Commit 138184adc5f7c60c184972e4d23f8cdb32aed77d plugged some but not
all of the leaks from commit 2a0c81a12c7e6c5ac1557b0f1f4a581f23fd4ca7.
This tightens things up some more.

Amit Kapila, per an observation by Tom Lane

This is so that auto_explain can use it.

Kyotaro HORIGUCHI

When there are consecutive spaces (or other non-format-code
characters) in the format, we should advance over exactly that many
characters of input. The previous coding mistakenly did a "skip
whitespace" action between such characters, possibly allowing more
input to be skipped than the user intended. We only need to skip
whitespace just before an actual field.

This is really a bug fix, but given the minimal number of field
complaints and the risk of breaking applications coded to expect the
old behavior, let's not back-patch it.

Jeevan Chalke
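
A sketch of the corrected rule, using a made-up one-letter format
where 'N' means "one digit field" and anything else is a literal
(hypothetical code, not the real to_timestamp implementation):

    #include <ctype.h>
    #include <stdbool.h>

    static bool
    match_pattern(const char *fmt, const char *in)
    {
        for (; *fmt; fmt++)
        {
            if (*fmt == 'N')
            {
                while (isspace((unsigned char) *in))
                    in++;       /* skip whitespace just before a field */
                if (!isdigit((unsigned char) *in++))
                    return false;
            }
            else if (*in++ == '\0')
                return false;   /* literal: advance exactly one character */
        }
        return true;
    }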

Sawada Masahiko

Previously the presence of a nextval() prevented the use of batch-mode
COPY. This patch introduces a special case just for nextval()
functions. In future we will introduce a general-case solution for
labelling volatile functions as safe for use.

krb5 has been deprecated since 8.3, and the recommended way to do
Kerberos authentication is using the GSSAPI authentication method
(which is still fully supported).

libpq retains the ability to identify krb5 authentication, but only
gives an error message about it being unsupported. Since all
authentication is initiated from the backend, there is no need to
keep it at all in the backend.

Suggested by Tom

Tablespaces have a few options which can be set on them to give PG
hints as to how the tablespace behaves (perhaps it's faster for
sequential scans, or better able to handle random access, etc). These
options were only available through the ALTER TABLESPACE command.

This adds the ability to set these options at CREATE TABLESPACE time,
removing the need to do both a CREATE TABLESPACE and ALTER TABLESPACE
to get the correct options set on the tablespace.

Vik Fearing, reviewed by Michael Paquier.

Historically, VACUUM has just reported its new_rel_tuples estimate
(the same thing it puts into pg_class.reltuples) to the stats
collector. That number counts both live and dead-but-not-yet-reclaimable
tuples. This behavior may once have been right, but modern versions of
the pgstats code track live and dead tuple counts separately, so
putting the total into n_live_tuples and zero into n_dead_tuples is
surely pretty bogus. Fix it to report live and dead tuple counts
separately.

This doesn't really do much for situations where updating transactions
commit concurrently with a VACUUM scan (possibly causing
double-counting or omission of the tuples they add or delete); but
it's clearly an improvement over what we were doing before.

Hari Babu, reviewed by Amit Kapila

This adds a 'MOVE' sub-command to ALTER TABLESPACE which allows moving
sets of objects from one tablespace to another. This can be extremely
handy and avoids a lot of error-prone scripting. ALTER TABLESPACE ...
MOVE will only move objects the user owns, will notify the user if no
objects were found, and can be used to move ALL objects or specific
types of objects (TABLES, INDEXES, or MATERIALIZED VIEWS).

We've always allowed CREATE TABLE to create tables in the database's
default tablespace without checking for CREATE permissions on that
tablespace. Unfortunately, the original implementation of ALTER TABLE
... SET TABLESPACE didn't pick up on that exception.

This changes ALTER TABLE ... SET TABLESPACE to allow the database's
default tablespace without checking for CREATE rights on that
tablespace, just as CREATE TABLE works today. Users could always do
this through a series of commands (CREATE TABLE ... AS SELECT * FROM
...; DROP TABLE ...; etc), so let's fix the oversight in SET
TABLESPACE's original implementation.

These changes should generally improve correctness/maintainability.
A nice side benefit is that several kilobytes move from initialized
data to text segment, allowing them to be shared across processes and
probably reducing copy-on-write overhead while forking a new backend.
Unfortunately this doesn't seem to help libpq in the same way (at
least not when it's compiled with -fpic on x86_64), but we can hope
the linker at least collects all nominally-const data together even
if it's not actually part of the text segment.

Also, make pg_encname_tbl[] static in encnames.c, since there seems
no very good reason for any other code to use it; per a suggestion
from Wim Lewis, who independently submitted a patch that was mostly
a subset of this one.

Oskari Saarenmaa, with some editorialization by me
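
The change is essentially const-qualifying static lookup tables. For
instance (hypothetical table, not one from the patch), both levels of
const matter if the data is to land in the read-only segment:

    /*
     * "const char *" alone leaves the pointer array itself writable,
     * so it stays in the data segment; "const char *const" makes the
     * whole table immutable and eligible for text/rodata, shared
     * read-only across forked backends.
     */
    static const char *const day_names[] = {
        "Sunday", "Monday", "Tuesday", "Wednesday",
        "Thursday", "Friday", "Saturday"
    };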

Expand the messages when log_connections is enabled to include the
fact that SSL is used and the SSL cipher information.

Dr. Andreas Kunert, review by Marko Kreen

Commit 6f60fdd7015b032bf49273c99f80913d57eac284 accidentally removed a
call to XLogWalRcvSendHSFeedback() after flushing received WAL to
disk. The consequence is that when the walsender is busy streaming
WAL, the walreceiver doesn't send HS feedback messages. One is sent
if nothing is received from the master for 100ms, but if there's a
steady stream of WAL, it never happens.

Backpatch to 9.3.

Andres Freund and Amit Kapila

Coverity is complaining that the value returned by pg_strtok in the
READ_LOCATION_FIELD and READ_BITMAPSET_FIELD macros is not used. In
commit 39bfc94c86f1990e9db8ea3da0e82995cc1b76db, we did this to the
other macros to placate compilers that complained when the variable
was completely unused; this extends that to the last remaining macros.
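
The usual shape of such a fix, as a hypothetical macro in the style of
the ones named above (READ_IGNORED_FIELD is invented; pg_strtok and
the local "length" come from the surrounding read-funcs context): the
token must still be fetched so the parser advances, and a (void) cast
marks the value as deliberately ignored:

    #define READ_IGNORED_FIELD() \
        do { \
            char *token = pg_strtok(&length);  /* must consume the token */ \
            (void) token;                      /* value deliberately unused */ \
        } while (0)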

Previously, we did this just once per checkpoint, but that could make
Hot Standby take a long time to initialize. To avoid busying an
otherwise-idle system, we don't do this if no WAL has been written
since we did it last.

Andres Freund

This could result in referencing uninitialized memory.

Michael Paquier, in response to a complaint from Andres Freund

In ordinary operation, VACUUM must be careful to take a cleanup lock
on each leaf page of a btree index; this ensures that no indexscans
could still be "in flight" to heap tuples due to be deleted. (Because
of possible index-tuple motion due to concurrent page splits, it's not
enough to lock only the pages we're deleting index tuples from.) In
Hot Standby, the WAL replay process must likewise lock every leaf
page. There were several bugs in the code for that:

* The replay scan might come across unused, all-zero pages in the
index. While btree_xlog_vacuum itself did the right thing (ie,
nothing) with such pages, xlogutils.c supposed that such pages must be
corrupt and would throw an error. This accounts for various reports of
replication failures with "PANIC: WAL contains references to invalid
pages". To fix, add a ReadBufferMode value that instructs
XLogReadBufferExtended not to complain when we're doing this.

* btree_xlog_vacuum performed the extra locking if standbyState ==
STANDBY_SNAPSHOT_READY, but that's not the correct test: we won't open
up for hot standby queries until the database has reached consistency,
and we don't want to do the extra locking till then either, for fear
of reading corrupted pages (which bufmgr.c would complain about). Fix
by exporting a new function from xlog.c that will report whether we're
actually in hot standby replay mode.

* To ensure full coverage of the index in the replay scan, btvacuumscan
would emit a dummy WAL record for the last page of the index, if no
vacuuming work had been done on that page. However, if the last page
of the index is all-zero, that would result in corruption of said
page, since the functions called on it weren't prepared to handle that
case. There's no need to lock any such pages, so change the logic to
target the last normal leaf page instead.

The first two of these bugs were diagnosed by Andres Freund, the other
one by me. Fixes based on ideas from Heikki Linnakangas and myself.

This has been wrong since Hot Standby was introduced, so back-patch
to 9.0.

This code provides infrastructure for user backends to communicate
relatively easily with background workers. The message queue is
structured as a ring buffer and allows messages of arbitrary length
to be sent and received.

Patch by me. Review by KaiGai Kohei and Andres Freund.
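
Conceptually, the sender writes a length word and then the payload
into a circular buffer, wrapping at the end. A toy single-process
sketch of that shape (all names invented; the real queue lives in a
dynamic shared memory segment and synchronizes sender and receiver):

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct
    {
        uint64_t    bytes_written;  /* advanced only by the sender */
        uint64_t    bytes_read;     /* advanced only by the receiver */
        char        data[8192];
    } toy_mq;

    static void
    ring_write(toy_mq *mq, const void *src, uint64_t nbytes)
    {
        const char *p = src;

        for (uint64_t i = 0; i < nbytes; i++)
            mq->data[(mq->bytes_written + i) % sizeof(mq->data)] = p[i];
        mq->bytes_written += nbytes;
    }

    static bool
    toy_mq_send(toy_mq *mq, const void *msg, uint64_t len)
    {
        uint64_t    free_space = sizeof(mq->data) -
            (mq->bytes_written - mq->bytes_read);

        if (sizeof(len) + len > free_space)
            return false;           /* full: a real queue would wait */

        ring_write(mq, &len, sizeof(len)); /* length word first... */
        ring_write(mq, msg, len);          /* ...then the payload */
        return true;
    }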

This interface is intended to make it simple to divide a dynamic
shared memory segment into different regions with distinct purposes.
It therefore serves much the same purpose that ShmemIndex accomplishes
for the main shared memory segment, but it is intended to be more
lightweight.

Patch by me. Review by Andres Freund.
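
The idea, as a toy sketch (invented names; the real interface differs,
e.g. it carves regions out of a DSM segment): a small directory of
(key, offset) entries at the front of one flat block, with regions
allocated from the rest and found again by key:

    #include <stddef.h>
    #include <stdint.h>

    #define TOY_TOC_MAX_ENTRIES 16

    typedef struct
    {
        uint64_t    key;
        size_t      offset;
    } toy_toc_entry;

    typedef struct
    {
        int         nentries;
        size_t      used;
        toy_toc_entry entries[TOY_TOC_MAX_ENTRIES];
        char        data[4096];
    } toy_toc;

    static void *
    toy_toc_allocate(toy_toc *toc, uint64_t key, size_t size)
    {
        if (toc->nentries >= TOY_TOC_MAX_ENTRIES ||
            size > sizeof(toc->data) - toc->used)
            return NULL;

        toc->entries[toc->nentries].key = key;
        toc->entries[toc->nentries].offset = toc->used;
        toc->nentries++;
        toc->used += size;
        return toc->data + toc->entries[toc->nentries - 1].offset;
    }

    static void *
    toy_toc_lookup(toy_toc *toc, uint64_t key)
    {
        for (int i = 0; i < toc->nentries; i++)
            if (toc->entries[i].key == key)
                return toc->data + toc->entries[i].offset;
        return NULL;
    }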

Move FreeConfigVariables() later to make sure ErrorConfFile is valid
when we use it, and get rid of an unnecessary string copy operation.

Amit Kapila, kibitzed by me.

Etsuro Fujita

On second thought, commit 0c051c90082da0b7e5bcaf9aabcbd4f361137cdc was
over-hasty: rather than allowing this case, we ought to reject it for
now. That leaves the field clear for a future feature that allows the
target table to be re-specified in the FROM (or USING) clause, which
will enable left-joining the target table to something else. We can
then also allow LATERAL references to such an explicitly re-specified
target table. But allowing them right now will create ambiguities or
worse for such a feature, and it isn't something we documented 9.3 as
supporting.

While at it, add a convenience subroutine to avoid having several
copies of the ereport for disallowed-LATERAL-reference cases.

Per reports from Andres Freund and Luke Campbell, a server failure
during set_pglocale_pgservice results in a segfault rather than a
useful error message, because the infrastructure needed to use ereport
hasn't been initialized; specifically, MemoryContextInit hasn't been
called. One known cause of this is starting the server in a directory
it doesn't have permission to read.

We could try to prevent set_pglocale_pgservice from using anything
that depends on palloc or elog, but that would be messy, and the odds
of future breakage seem high. Moreover there are other things being
called in main.c that look likely to use palloc or elog too ---
perhaps those things shouldn't be there, but they are there today. The
best solution seems to be to move the call of MemoryContextInit to
very early in the backend's real main() function. I've verified that
an elog or ereport occurring immediately after that is now capable of
sending something useful to stderr.

I also added code to elog.c to print something intelligible rather
than just crashing if MemoryContextInit hasn't created the
ErrorContext. This could happen if MemoryContextInit itself fails (due
to malloc failure), and provides some future-proofing against someone
trying to sneak in new code even earlier in server startup.

Back-patch to all supported branches. Since we've only heard reports
of this type of failure recently, it may be that some recent change
has made it more likely to see a crash of this kind; but it sure looks
like it's broken all the way back.

The standard typanalyze functions skip over values whose detoasted
size exceeds WIDTH_THRESHOLD (1024 bytes), so as to limit memory bloat
during ANALYZE. However, we (I think I, actually :-() failed to
consider the possibility that *every* non-null value in a column is
too wide. While compute_minimal_stats() seems to behave reasonably
anyway in such a case, compute_scalar_stats() just fell through and
generated no pg_statistic entry at all. That's unnecessarily
pessimistic: we can still produce valid stanullfrac and stawidth
values in such cases, since we do include too-wide values in the
average-width calculation. Furthermore, since the general assumption
in this code is that too-wide values are probably all distinct from
each other, it seems reasonable to set stadistinct to -1 ("all
distinct").

Per complaint from Kadri Raudsepp. This has been like this since
roughly neolithic times, so back-patch to all supported branches.

Per suggestion from Peter E and Alvaro

The new MultiXact freezing routines introduced by commit 8e9a16ab8f7
neglected to consider tuples that came from a pg_upgrade'd database; a
vacuum run that tried to freeze such tuples would die with an error
such as

ERROR: MultiXactId 11415437 does no longer exist -- apparent wraparound

To fix, ensure that GetMultiXactIdMembers is allowed to return empty
multis when the infomask bits are right, as is done in other
callsites.

Per trouble report from F-Secure.

In passing, fix a copy-and-paste bug reported by Andrey Karpov of
VIVA64, found with their PVS-Studio static checker: instead of setting
relminmxid to Invalid, we were setting relfrozenxid twice. Not an
important mistake because that code branch is about relations for
which we don't use the frozenxid/minmxid values at all in the first
place, but it seems to warrant a fix nonetheless.

Buildfarm member dunlin has been crashing since commit 8b49a60, but
other machines seem fine with that code. It turns out that removing
the local variables in ordered_set_startup() that are copies of fields
in "qstate" dodges the problem. This might cost a few cycles on
register-rich machines, but it's probably a wash on others, and in any
case this code isn't performance-critical. Thanks to Jeremy Drake for
off-list investigation.

Makes the replay loop slightly more readable, by separating the
concerns of whether to stop and whether to delay, and how to extract
the timestamp from a record.

This has the user-visible change that the timestamp of the last
applied record is now updated after actually applying it. Before, it
was updated just before applying it. That meant that
pg_last_xact_replay_timestamp() could return the timestamp of a commit
record that is in the process of being replayed, but not yet applied.
Normally the difference is small, but if min_recovery_apply_delay is
set, there could be a significant delay between reading a record and
applying it.

Another behavioral change is that if you recover to a restore point,
we stop after the restore point record, not before it. It makes no
difference as far as running queries on the server is concerned, as
applying a restore point record changes nothing, but if you examine
the timeline history you will see that the new timeline branched off
just after the restore point record, not before it. One practical
consequence is that if you do PITR to the new timeline, and set the
recovery target to the same named restore point again, it will find
and stop recovery at the same restore point. Conceptually, I think it
makes more sense to consider the restore point as part of the new
timeline's history than not.

In principle, setting the last-replayed timestamp before actually
applying the record was a bug all along, but it doesn't seem worth the
risk to backpatch, since min_recovery_apply_delay was only added in
9.4.

Minor improvement to commit daa7527afc2274432094ebe7ceb03aa41f916607:
s_lock.h no longer has any need to mention PGSemaphoreData, so we can
rip out the #include that supplies that. In a non-HAVE_SPINLOCKS
build, this doesn't really buy much since we still need the #include
in spin.h --- but everywhere else, this reduces #include footprint by
some trifle, and helps keep the different locking facilities separate.

In commit c1352052ef1d4eeb2eb1d822a207ddc2d106cb13, I implemented an
optimization that assumed that a function's argument expressions would
either always return a set (ie multiple rows), or always not. This is
wrong however: we allow CASE expressions in which some arms return a
set of some type and others just return a scalar of that type. There
may be other examples as well. To fix, replace the run-time test of
whether an argument returned a set with a static precheck
(expression_returns_set). This adds a little bit of query startup
overhead, but it seems barely measurable.

Per bug #8228 from David Johnston. This has been broken since 8.0,
so patch all supported branches.

Instead of allocating a semaphore from the operating system for every
spinlock, allocate a fixed number of semaphores (by default, 1024)
from the operating system and multiplex all the spinlocks that get
created onto them. This could self-deadlock if a process attempted
to acquire more than one spinlock at a time, but since processes
aren't supposed to execute anything other than short stretches of
straight-line code while holding a spinlock, that shouldn't happen.

One motivation for this change is that, with the introduction of
dynamic shared memory, it may be desirable to create spinlocks that
last for less than the lifetime of the server. Without this change,
attempting to use such facilities under --disable-spinlocks would
quickly exhaust any supply of available semaphores. Quite apart
from that, it's desirable to contain the quantity of semaphores
needed to run the server simply on convenience grounds, since using
too many may make it harder to get PostgreSQL running on a new
platform, which is mostly the point of --disable-spinlocks in the
first place.

Patch by me; review by Tom Lane.
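
A minimal sketch of the multiplexing scheme with POSIX semaphores (all
names invented; the actual assignment strategy in the patch may
differ): each "spinlock" is just an index into a fixed pool, handed
out at creation time:

    #include <semaphore.h>

    #define NUM_SPINLOCK_SEMAPHORES 1024

    static sem_t sem_pool[NUM_SPINLOCK_SEMAPHORES];
    static int   next_sem = 0;

    typedef int slock_emu;          /* the "spinlock" is a pool index */

    static void
    emu_init_pool(void)
    {
        for (int i = 0; i < NUM_SPINLOCK_SEMAPHORES; i++)
            sem_init(&sem_pool[i], 0, 1);  /* binary semaphore, unlocked */
    }

    /* Assignment at creation; a real implementation would synchronize. */
    static void
    emu_spin_init(slock_emu *lock)
    {
        *lock = next_sem++ % NUM_SPINLOCK_SEMAPHORES;
    }

    /*
     * Two spinlocks may share a semaphore; that cannot self-deadlock
     * as long as no process ever holds more than one spinlock at once.
     */
    static void
    emu_spin_lock(slock_emu *lock)
    {
        sem_wait(&sem_pool[*lock]);
    }

    static void
    emu_spin_unlock(slock_emu *lock)
    {
        sem_post(&sem_pool[*lock]);
    }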

If pause_at_recovery_target is set, recovery pauses *before* applying
the target record, even if recovery_target_inclusive is set. If you
then continue with pg_xlog_replay_resume(), it will apply the target
record before ending recovery. In other words, if you log in while
it's paused and verify that the database looks OK, ending recovery
changes its state again, possibly destroying data that you were
trying to salvage with PITR.

Backpatch to 9.1; this has been broken since pause_at_recovery_target
was added.

The docs say that only one of recovery_target_xid,
recovery_target_time, or recovery_target_name can be specified. But
the code actually did something different, so that a name overrode
time, and an xid overrode both time and name. Now the target specified
last takes effect, whether it's an xid, time or name.

With this patch, we still accept multiple recovery_target settings,
even though the docs say that only one can be specified. It's a
general property of the recovery.conf file parser that if you specify
the same option twice, the last one takes effect, like with
postgresql.conf.
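
For illustration, a toy model of that parser property (made-up code,
not the real recovery.conf parser): applying assignments in file order
and overwriting on each match is exactly what yields last-one-wins
semantics:

    #include <stdio.h>
    #include <string.h>

    int
    main(void)
    {
        const char *settings[][2] = {
            {"recovery_target_name", "before_upgrade"},
            {"recovery_target_xid", "1234"},   /* specified last: wins */
        };
        const char *kind = NULL;
        const char *value = NULL;

        for (int i = 0; i < 2; i++)
        {
            if (strncmp(settings[i][0], "recovery_target_", 16) == 0)
            {
                kind = settings[i][0];         /* overwrite earlier choice */
                value = settings[i][1];
            }
        }
        printf("effective target: %s = %s\n", kind, value);
        return 0;
    }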

In the transition functions, we don't really need to recheck this
after the first call. I had been feeling paranoid about possibly
getting a non-null argument value in some other context; but it's
probably game over anyway if we have a non-null "internal" value
that's not what we are expecting.

In the final functions, the general convention in pre-existing final
functions seems to be that an Assert() is good enough, so do it like
that here too.

This seems to save a few tenths of a percent of overall query runtime,
which isn't much, but still it's just overhead if there's not a
plausible case where the checks would fire.

Keep a pre-initialized FunctionCallInfoData in AggStatePerAggData, and
re-use that at each row instead of doing InitFunctionCallInfoData each
time. This saves only half a dozen assignments and maybe some stack
manipulation, and yet that seems to be good for a percent or two of
the overall query run time for simple aggregates such as count(*).
The cost is that the FunctionCallInfoData (which is about a kilobyte,
on 64-bit machines) stays allocated for the duration of the query
instead of being short-lived stack data. But we're already paying an
equivalent space cost for each regular FuncExpr or OpExpr node, so I
don't feel bad about paying it for aggregate functions. The code seems
a little cleaner this way too, since the number of things passed to
advance_transition_function decreases.
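
The shape of the optimization, as a generic sketch (hypothetical
types, not the executor's real structs): fill the invariant fields of
the call-info struct once per query, then per row touch only the
argument slots:

    #include <stdbool.h>

    typedef struct
    {
        int     nargs;
        double  arg[2];
        bool    argnull[2];
    } ToyCallInfo;

    typedef double (*toy_transfn) (ToyCallInfo *fcinfo);

    static double
    run_agg(toy_transfn fn, const double *rows, int nrows)
    {
        ToyCallInfo fcinfo;     /* lives for the whole "query" */
        double      state = 0.0;

        fcinfo.nargs = 2;       /* invariant fields filled just once */

        for (int i = 0; i < nrows; i++)
        {
            /* per row, only the argument slots are refreshed */
            fcinfo.arg[0] = state;
            fcinfo.arg[1] = rows[i];
            fcinfo.argnull[0] = fcinfo.argnull[1] = false;
            state = fn(&fcinfo);
        }
        return state;
    }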

When starting WAL replay from an online checkpoint, the last replayed
WAL record variable was initialized using the checkpoint record's
location, even though the records between the REDO location and the
checkpoint record had not been replayed yet. That was noted as
"slightly confusing" but harmless in the comment, but in some cases,
it fooled CheckRecoveryConsistency to incorrectly conclude that we had
already reached a consistent state immediately at the beginning of WAL
replay. That caused the system to accept read-only connections in hot
standby mode too early, and also PANICs with message "WAL contains
references to invalid pages".

Fix by initializing the variables to the REDO location instead.

In 9.2 and above, change CheckRecoveryConsistency() to use the
lastReplayedEndRecPtr variable when checking if the backup end
location has been reached. It was inconsistently using EndRecPtr for
that check, but lastReplayedEndRecPtr when checking the min recovery
point. It made no difference before this patch, because in all the
places where CheckRecoveryConsistency was called the two variables
were the same, but it was always an accident waiting to happen, and
would have been wrong after this patch anyway.

Report and analysis by Tomonari Katsumata, bug #8686. Backpatch to
9.0, where hot standby was introduced.

Update all files in head, and files COPYRIGHT and legal.sgml in all
back branches.

I failed to think much about UPDATE/DELETE when implementing LATERAL
:-(. The implemented behavior ended up being that subqueries in the
FROM or USING clause (respectively) could access the update/delete
target table as though it were a lateral reference; which seems fine
if they said LATERAL, but certainly ought to draw an error if they
didn't. Fix it so you get a suitable error when you omit LATERAL.

Per report from Emre Hasegeli.

MSVC doesn't know that elog(ERROR) doesn't return, and gives a warning
about missing return. Silence that.

Amit Kapila