Author: Higuchi-san ("Higuchi, Daisuke" <higuchi.daisuke@jp.fujitsu.com>)
So far ECPG programs had to treat binary data for a bytea column as the 'char' type.
But this meant converting from/to the escaped format with PQunescapeBytea()/
PQescapeBytea(), forcing users to add unnecessary code and pay a runtime cost
for the conversion. By adding a dedicated datatype for bytea, most of this
special handling is no longer needed.
Author: Matsumura-san ("Matsumura, Ryo" <matsumura.ryo@jp.fujitsu.com>)
Discussion: https://postgr.es/m/flat/03040DFF97E6E54E88D3BFEE5F5480F737A141F9@G01JPEXMBYT04
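As a rough illustration (not part of the commit message), a host variable of the new type might be declared and used like this; the table name "images" and the buffer size are invented for the example:

    EXEC SQL BEGIN DECLARE SECTION;
    int   id = 1;
    bytea img[4096];   /* dedicated bytea type: no PQescapeBytea()/PQunescapeBytea() */
    EXEC SQL END DECLARE SECTION;

    EXEC SQL SELECT data INTO :img FROM images WHERE id = :id;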
DECLARE STATEMENT is a statement that lets users declare an identifier
pointing at a connection. This identifier can then be used in other embedded
dynamic SQL statements such as PREPARE, EXECUTE, DECLARE CURSOR and so on.
When connecting to a non-default connection, the AT clause can be used in
a DECLARE STATEMENT once and is no longer needed in every dynamic SQL
statement. This makes ECPG applications easier to write and more efficient.
Moreover, writing code without designating a connection explicitly improves
portability.
Authors: Ideriha-san ("Ideriha, Takeshi" <ideriha.takeshi@jp.fujitsu.com>)
Kuroda-san ("Kuroda, Hayato" <kuroda.hayato@jp.fujitsu.com>)
Discussion: https://postgr.es/m/4E72940DA2BF16479384A86D54D0988A565669DF@G01JPEXMBKW04
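A hedged sketch of the pattern described above; the connection name "con1", the statement name "stmt" and the host variables are made up for illustration:

    EXEC SQL AT con1 DECLARE stmt STATEMENT;   /* tie "stmt" to connection con1 once  */

    EXEC SQL PREPARE stmt FROM :query;         /* no AT clause needed here ...        */
    EXEC SQL EXECUTE stmt INTO :result;        /* ... nor in any later dynamic SQL    */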
The function called can result in an out-of-memory error that was subsequently
disregarded. Instead it should set the appropriate SQL error variables so that
it can be checked by whatever WHENEVER statement is defined.
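For context, a minimal sketch of how an embedded-SQL program would then pick up the error; the handler name handle_error() is hypothetical:

    EXEC SQL WHENEVER SQLERROR CALL handle_error();   /* or GOTO / SQLPRINT / STOP */

    /* or test the sqlca fields explicitly after a statement: */
    if (sqlca.sqlcode < 0)
        fprintf(stderr, "SQL error: %s\n", sqlca.sqlerrm.sqlerrmc);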
during development. Test cases themselves should not hang or segfault.
This essentially reverts commits a772624b1 and 04fbe0e45, which
added "_configthreadlocale(_ENABLE_PER_THREAD_LOCALE)" calls to the
thread-related ecpg test programs. That was nothing but a hack,
because we shouldn't expect that ecpg-using applications have
done that for us; and now that we've inserted such calls into
ecpglib, the tests should still pass without it.
(If they don't, it would be good to know that.)
HEAD only; there seems no big need to change this in the
back branches.
Discussion: https://postgr.es/m/22937.1548307384@sss.pgh.pa.us
A report from Andrew Dunstan showed that an ecpglib breakage that
causes repeated query failures could lead to infinite loops in some
ecpg test scripts, because they contain "while(1)" loops with no
exit condition other than successful test completion. That might
be all right for manual testing, but it seems entirely unacceptable
for automated test environments such as our buildfarm. We don't
want buildfarm owners to have to intervene manually when a test
goes wrong.
To fix, just change all those while(1) loops to exit after at most
100 iterations (which is more than any of them expect to iterate).
This seems sufficient since we'd see discrepancies in the test output
if any loop executed the wrong number of times.
I tested this by dint of intentionally breaking ecpg_do_prologue
to always fail, and verifying that the tests still got to completion.
Back-patch to all supported branches, since the whole point of this
exercise is to protect the buildfarm against future mistakes.
Discussion: https://postgr.es/m/18693.1548302004@sss.pgh.pa.us
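Schematically, the change replaces the unbounded loops with a bounded form along these lines (a sketch, not the literal test code):

    /* before: while (1) { ... }  -- spins forever if every iteration fails */
    for (int i = 0; i < 100; i++)   /* 100 exceeds what any test expects to iterate */
    {
        /* ... test body; break on the normal completion condition ... */
    }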
Apparently, some builds of MinGW contain a version of
_configthreadlocale() that always returns -1, indicating failure.
Rather than treating that as a curl-up-and-die condition, soldier on
as though the function didn't exist. This leaves us without thread
safety on such MinGW versions, but we didn't have it anyway.
Discussion: https://postgr.es/m/d06a16bc-52d6-9f0d-2379-21242d7dbe81@2ndQuadrant.com
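In other words, a failure return is now treated the same as the function not existing; a hedged sketch of the pattern (HAVE__CONFIGTHREADLOCALE is the configure symbol for this function):

    #ifdef HAVE__CONFIGTHREADLOCALE
    if (_configthreadlocale(_ENABLE_PER_THREAD_LOCALE) == -1)
    {
        /* Some MinGW builds always fail here; soldier on as though the
         * function were unavailable, falling back to plain setlocale(). */
    }
    #endif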
While Windows (allegedly) has _configthreadlocale() pretty far back,
it seems MinGW didn't acquire support for that till more recently.
Fortunately, we can use an autoconf probe on that toolchain,
instead of guessing whether it's there. (Hm, I wonder whether Cygwin
will need this also.)
Per buildfarm.
Discussion: https://postgr.es/m/20190121193512.tdmcnic2yjxlufaw@alap3.anarazel.de
ecpglib attempts to force the LC_NUMERIC locale to "C" while reading
server output, to avoid problems with strtod() and related functions.
Historically it's just issued setlocale() calls to do that, but that
has major problems if we're in a threaded application. setlocale()
itself is not required by POSIX to be thread-safe (and indeed is not,
on recent OpenBSD). Moreover, its effects are process-wide, so that
we could cause unexpected results in other threads, or another thread
could change our setting.
On platforms having uselocale(), which is required by POSIX:2008,
we can avoid these problems by using uselocale() instead. Windows
goes its own way as usual, but we can make it safe by using
_configthreadlocale(). Platforms having neither continue to use the
old code, but that should be pretty much nobody among current systems.
This should get back-patched, but let's see what the buildfarm
thinks of it first.
Michael Meskes and Tom Lane; thanks also to Takayuki Tsunakawa.
Discussion: https://postgr.es/m/31420.1547783697@sss.pgh.pa.us
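On a platform with uselocale(), the thread-safe approach looks roughly like this; a simplified sketch of the technique rather than ecpglib's exact code:

    #include <locale.h>

    locale_t    clocale = newlocale(LC_NUMERIC_MASK, "C", (locale_t) 0);

    if (clocale != (locale_t) 0)
    {
        locale_t    oldlocale = uselocale(clocale);   /* affects only this thread */

        /* ... read server output with strtod() and friends ... */

        uselocale(oldlocale);                         /* restore the caller's locale */
        freelocale(clocale);
    }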
Extends the COPY FROM command with a WHERE condition, which allows doing
various types of filtering while importing the data (random sampling,
condition on a data column, etc.). Until now such filtering required
either preprocessing of the input data, or importing all data and then
filtering in the database. COPY FROM ... WHERE is an easy-to-use and
low-overhead alternative for most simple cases.
Author: Surafel Temesgen
Reviewed-by: Tomas Vondra, Masahiko Sawada, Lim Myungkyu
Discussion: https://www.postgresql.org/message-id/flat/CALAY4q_DdpWDuB5-Zyi-oTtO2uSk8pmy+dupiRe3AvAc++1imA@mail.gmail.com
Commit c0d0e54084 replaced the ones in the documentation, but missed out
on the ones in the code. Replace those as well, but unlike c0d0e54084,
don't backpatch the code changes to avoid breaking translations.
We've been speculating for a long time that hash-based keyword lookup
ought to be faster than binary search, but up to now we hadn't found
a suitable tool for generating the hash function. Joerg Sonnenberger
provided the inspiration, and sample code, to show us that rolling our
own generator wasn't a ridiculous idea. Hence, do that.
The method used here requires a lookup table of approximately 4 bytes
per keyword, but that's less than what we saved in the predecessor commit
afb0d0712, so it's not a big problem. The time savings is indeed
significant: preliminary testing suggests that the total time for raw
parsing (flex + bison phases) drops by ~20%.
Patch by me, but it owes its existence to Joerg Sonnenberger;
thanks also to John Naylor for review.
Discussion: https://postgr.es/m/20190103163340.GA15803@britannica.bec.de
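The resulting lookup pattern is, schematically: run the generated hash, then confirm with a single string comparison. The names below (keyword_hash, kw_string, NUM_KEYWORDS) are placeholders, not the generated identifiers:

    int
    keyword_lookup(const char *word, size_t len)
    {
        int     h = keyword_hash(word, len);    /* collision-free over the keyword set */

        if (h < 0 || h >= NUM_KEYWORDS)
            return -1;                          /* cannot be a keyword */
        if (strcmp(kw_string(h), word) != 0)
            return -1;                          /* one comparison rejects non-keywords */
        return h;
    }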
Previously, ScanKeywordLookup was passed an array of string pointers.
This had some performance deficiencies: the strings themselves might
be scattered all over the place depending on the compiler (and some
quick checking shows that at least with gcc-on-Linux, they indeed
weren't reliably close together). That led to very cache-unfriendly
behavior as the binary search touched strings in many different pages.
Also, depending on the platform, the string pointers might need to
be adjusted at program start, so that they couldn't be simple constant
data. And the ScanKeyword struct had been designed with an eye to
32-bit machines originally; on 64-bit it requires 16 bytes per
keyword, making it even more cache-unfriendly.
Redesign so that the keyword strings themselves are allocated
consecutively (as part of one big char-string constant), thereby
eliminating the touch-lots-of-unrelated-pages syndrome. And get
rid of the ScanKeyword array in favor of three separate arrays:
uint16 offsets into the keyword array, uint16 token codes, and
uint8 keyword categories. That reduces the overhead per keyword
to 5 bytes instead of 16 (even less in programs that only need
one of the token codes and categories); moreover, the binary search
only touches the offsets array, further reducing its cache footprint.
This also lets us put the token codes somewhere else than the
keyword strings are, which avoids some unpleasant build dependencies.
While we're at it, wrap the data used by ScanKeywordLookup into
a struct that can be treated as an opaque type by most callers.
That doesn't change things much right now, but it will make it
less painful to switch to a hash-based lookup method, as is being
discussed in the mailing list thread.
Most of the change here is associated with adding a generator
script that can build the new data structure from the same
list-of-PG_KEYWORD header representation we used before.
The PG_KEYWORD lists that plpgsql and ecpg used to embed in
their scanner .c files have to be moved into headers, and the
Makefiles have to be taught to invoke the generator script.
This work is also necessary if we're to consider hash-based lookup,
since the generator script is what would be responsible for
constructing a hash table.
Aside from saving a few kilobytes in each program that includes
the keyword table, this seems to speed up raw parsing (flex+bison)
by a few percent. So it's worth doing even as it stands, though
we think we can gain even more with a follow-on patch to switch
to hash-based lookup.
John Naylor, with further hacking by me
Discussion: https://postgr.es/m/CAJVSVGXdFVU2sgym89XPL=Lv1zOS5=EHHQ8XWNzFL=mTXkKMLw@mail.gmail.com
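A self-contained sketch of the layout and search described above, shrunk to three keywords with made-up token codes:

    #include <string.h>

    /* all keyword strings packed into one constant blob */
    static const char kw_strings[] = "abort\0begin\0zone";
    static const unsigned short kw_offsets[] = { 0, 6, 12 };       /* uint16 offsets    */
    static const unsigned short kw_tokens[]  = { 100, 101, 102 };  /* dummy token codes */
    #define NUM_KEYWORDS 3

    static int
    keyword_token(const char *word)
    {
        int     lo = 0;
        int     hi = NUM_KEYWORDS - 1;

        /* the binary search touches only the compact offsets array,
         * not string pointers scattered across many pages */
        while (lo <= hi)
        {
            int     mid = (lo + hi) / 2;
            int     cmp = strcmp(word, kw_strings + kw_offsets[mid]);

            if (cmp == 0)
                return kw_tokens[mid];
            if (cmp < 0)
                hi = mid - 1;
            else
                lo = mid + 1;
        }
        return -1;              /* not a keyword */
    }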
It's important for link commands to list *.o input files before -l
switches for libraries, as library code may not get pulled into the link
unless referenced by an earlier command-line entry. This is certainly
necessary for static libraries (.a style). Apparently on some platforms
it is also necessary for shared libraries, as reported by Donald Dong.
We often put -l switches for within-tree libraries into LDFLAGS, meaning
that link commands that list *.o files after LDFLAGS are hazardous.
Most of our link commands got this right, but a few did not. In
particular, places that relied on gmake's default implicit link rule
failed, because that puts LDFLAGS first. Fix that by overriding the
built-in rule with our own. The implicit link rules in
src/makefiles/Makefile.* for single-.o-file shared libraries mostly
got this wrong too, so fix them. I also changed the link rules for the
backend and a couple of other places for consistency, even though they
are not (currently) at risk because they aren't adding any -l switches
to LDFLAGS.
Arguably, the real problem here is that we're abusing LDFLAGS by
putting -l switches in it and we should stop doing that. But changing
that would be quite invasive, so I'm not eager to do so.
Perhaps this is a candidate for back-patching, but so far it seems
that problems can only be exhibited in test code we don't normally
build, and at least some of the problems are new in HEAD anyway.
So I'll refrain for now.
Donald Dong and Tom Lane
Discussion: https://postgr.es/m/CAKABAquXn-BF-vBeRZxhzvPyfMqgGuc74p8BmQZyCFDpyROBJQ@mail.gmail.com
Backpatch-through: certain files through 9.4
Re-making ecpglib's typename.o is dangerous because another make thread
could be doing that at the same time. While we've not heard field
complaints traceable to this, it seems inevitable that it'd bite someone
eventually. Instead, symlink typename.c into the preproc directory and
recompile it there. That file is small enough that compiling it twice
isn't much of a penalty. Furthermore, this way we get a .o file that's
made without shlib CFLAGS, which seems cleaner.
This requires adding more stuff to the module's -I list. The MSVC
aspect of that is untested, but I'm sure the buildfarm will tell me
if I got it wrong.
Per a suggestion from Peter Eisentraut. Although this is theoretically
a bug fix, the lack of field reports makes me feel we needn't back-patch.
Discussion: https://postgr.es/m/31364.1543511708@sss.pgh.pa.us
This should reduce confusion, and in particular make it safe to
copy typename.c into preproc/ and compile it there.
This doesn't affect anything outside ecpg, and particularly not
end users, because these files don't get installed; they just
exist to share declarations among the .c files of each subdirectory.
Discussion: https://postgr.es/m/31364.1543511708@sss.pgh.pa.us
Previously tables declared WITH OIDS, including a significant fraction
of the catalog tables, stored the oid column not as a normal column,
but as part of the tuple header.
This special column was not shown by default, which was somewhat odd,
as it's often (consider e.g. pg_class.oid) one of the more important
parts of a row. Neither pg_dump nor COPY included the contents of the
oid column by default.
The fact that the oid column was not an ordinary column necessitated a
significant amount of special-case code to support oid columns. That
was already painful for the existing code, and the upcoming work aiming to
make table storage pluggable would have required expanding and duplicating
that "specialness" significantly.
WITH OIDS has been deprecated since 2005 (commit ff02d0a05280e0).
Remove it.
Removing includes:
- CREATE TABLE and ALTER TABLE syntax for declaring the table to be
WITH OIDS has been removed (WITH (oids[ = true]) will error out)
- pg_dump does not support dumping tables declared WITH OIDS and will
issue a warning when dumping one (and ignore the oid column).
- restoring a pg_dump archive with pg_restore will warn when
restoring a table with oid contents (and ignore the oid column)
- COPY will refuse to load a binary dump that includes oids.
- pg_upgrade will error out when encountering tables declared WITH
OIDS; they have to be altered to remove the oid column first.
- Functionality to access the oid of the last inserted row (like
plpgsql's RESULT_OID, spi's SPI_lastoid, ...) has been removed.
The syntax for declaring a table WITHOUT OIDS (or WITH (oids = false)
for CREATE TABLE) is still supported. While that requires a bit of
support code, it seems unnecessary to break applications / dumps that
do not use oids, and are explicit about not using them.
The biggest user of WITH OID columns was postgres' catalog. This
commit changes all 'magic' oid columns to be columns that are normally
declared and stored. To reduce unnecessary query breakage all the
newly added columns are still named 'oid', even if a table's column
naming scheme would indicate 'reloid' or such. This obviously
requires adapting a lot of code, mostly replacing oid access via
HeapTupleGetOid() with access to the underlying Form_pg_*->oid column.
The bootstrap process now assigns oids for all oid columns in
genbki.pl that do not have an explicit value (starting at the largest
oid previously used); only oids assigned later will be above
FirstBootstrapObjectId. As the oid column is now a normal column, the
special bootstrap syntax for oids has been removed.
Oids are not automatically assigned during insertion anymore, all
backend code explicitly assigns oids with GetNewOidWithIndex(). For
the rare case that insertions into the catalog via SQL are called for
the new pg_nextoid() function can be used (which only works on catalog
tables).
The fact that oid columns on system tables are now normal columns
means that they will be included in the set of columns expanded
by * (i.e. SELECT * FROM pg_class will now include the table's oid,
previously it did not). It'd not technically be hard to hide the oid
column by default, but that'd mean confusing behavior would either
have to be carried forward forever, or it'd cause breakage down the
line.
While it's not unlikely that further adjustments are needed, the
scope/invasiveness of the patch makes it worthwhile to merge this
now. It's painful to maintain externally, too complicated to commit
after the code freeze, and a dependency of a number of other
patches.
Catversion bump, for obvious reasons.
Author: Andres Freund, with contributions by John Naylor
Discussion: https://postgr.es/m/20180930034810.ywp2c7awz7opzcfr@alap3.anarazel.de
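In backend terms the mechanical part of the change has this shape; a schematic before/after fragment that only makes sense inside the server source tree:

    Oid     relid;

    /* before: the oid lived in the tuple header */
    relid = HeapTupleGetOid(tuple);

    /* after: the oid is an ordinary column of the catalog row struct */
    Form_pg_class classform = (Form_pg_class) GETSTRUCT(tuple);

    relid = classform->oid;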
In commit ecfd55795, I removed sqlda.c's checks for ndigits != 0 on the
grounds that we should duplicate the state of the numeric value's digit
buffer even when all the digits are zeroes. However, that still isn't
quite right, because another possible state of the digit buffer is
buf == digits == NULL (this occurs for a NaN). As the code now stands,
it'll invoke memcpy with a NULL source address and zero bytecount,
which we know a few platforms crash on. Hence, reinstate the no-copy
short-circuit, but make it test specifically for buf != NULL rather than
some other condition. In hindsight, the ndigits test (added by commit
f2ae9f9c3) was almost certainly meant to fix the NaN case not the
all-zeroes case as the associated thread alleged.
As before, back-patch to all supported versions.
Discussion: https://postgr.es/m/1803D792815FC24D871C00D17AE95905C71161@g01jpexmbkw24
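The underlying rule is the usual one about memcpy() and NULL: a zero byte count does not make a NULL source pointer safe on every platform. A hedged sketch of the guard, with dest and nbytes standing in for the real destination and size expression:

    if (num->buf != NULL)           /* NaN has buf == digits == NULL: nothing to copy */
    {
        /* copy the digit buffer even if every digit is zero */
        memcpy(dest, num->buf, nbytes);
    }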
Numeric values with leading zeroes were incorrectly copied into a
SQLDA (SQL Descriptor Area), leading to wrong results in ECPG programs.
Report and patch by Daisuke Higuchi. Back-patch to all supported
versions.
Discussion: https://postgr.es/m/1803D792815FC24D871C00D17AE95905C71161@g01jpexmbkw24
The realfail1 and realfail2 backup-prevention rules always returned
token type FCONST, ignoring the possibility that what we've scanned
is more appropriately described as ICONST. I think that at the
time that code was added, it might actually have been safe to not
distinguish; but since we started allowing AS-less aliases in SELECT
target lists, it's definitely legal to have a number immediately
followed by an identifier.
In the SELECT case, it seems there's no visible consequence because
make_const() will change the type back to integer anyway. But I'm
worried that there are other contexts, or will be in future, where
it's more important to get the constant's type right.
Hence, use process_integer_literal to correctly determine which
token type to return.
Arguably this is a bug fix, but given the lack of evidence of
user-visible problems, I'll refrain from back-patching.
Discussion: https://postgr.es/m/21364.1542136808@sss.pgh.pa.us
scanner_init/scanner_finish weren't actually called from anywhere,
and the scanbuf variables they set up weren't used either.
Remove unused declaration for mm_realloc, too.
John Naylor
Discussion: https://postgr.es/m/CAJVSVGWGqY9YBs2EwtRUkbNv=hXkN8yRPOoD1wxE6COgvvrz5g@mail.gmail.com
Make a bunch of basically-cosmetic changes to reduce the diffs between
the flex rules in scan.l, psqlscan.l, and pgc.l. Reorder some code,
adjust a lot of whitespace, sync some comments, make use of flex start
condition scopes to do that.
There are a few non-cosmetic changes in the ECPG lexer:
* Bring over the decimalfail rule (and support function
process_integer_literal) so that ECPG will lex "1..10" into
the same tokens as the backend would. I'm not sure this makes any
visible difference to users, but I'm not sure it doesn't, either.
* <xdc><<EOF>> gets its own rule so as to produce a more on-point
error message.
* Remove duplicate <SQL>{xdstart} rule.
John Naylor, with a few additional changes by me
Discussion: https://postgr.es/m/CAJVSVGWGqY9YBs2EwtRUkbNv=hXkN8yRPOoD1wxE6COgvvrz5g@mail.gmail.com
Author: Daniel Gustafsson <daniel@yesql.se>
PQnotifies() is defined to just process already-read data, not try to read
any more from the socket. (This is a debatable decision, perhaps, but I'm
hesitant to change longstanding library behavior.) The documentation has
long recommended calling PQconsumeInput() before PQnotifies() to ensure
that any already-arrived message would get absorbed and processed.
However, psql did not get that memo, which explains why it's not very
reliable about reporting notifications promptly.
Also, most (not quite all) callers called PQconsumeInput() just once before
a PQnotifies() loop. Taking this recommendation seriously implies that we
should do PQconsumeInput() before each call. This is more important now
that we have "payload" strings in notification messages than it was before;
that increases the probability of having more than one packet's worth
of notify messages. Hence, adjust code as well as documentation examples
to do it like that.
Back-patch to 9.5 to match related server fixes. In principle we could
probably go back further with these changes, but given lack of field
complaints I doubt it's worthwhile.
Discussion: https://postgr.es/m/CAOYf6ec-TmRYjKBXLLaGaB-jrd=mjG1Hzn1a1wufUAR39PQYhw@mail.gmail.com
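The recommended client pattern therefore becomes the following, a condensed sketch of the documented libpq usage:

    #include <stdio.h>
    #include <libpq-fe.h>

    /* once the connection's socket reports readable data: */
    if (!PQconsumeInput(conn))                  /* absorb whatever has arrived */
        fprintf(stderr, "%s", PQerrorMessage(conn));

    PGnotify *notify;

    while ((notify = PQnotifies(conn)) != NULL)
    {
        printf("NOTIFY \"%s\" with payload \"%s\"\n",
               notify->relname, notify->extra);
        PQfreemem(notify);
        PQconsumeInput(conn);                   /* look for further input before looping */
    }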
Per research by Andres.
Discussion: https://postgr.es/m/20181015200754.7y7zfuzsoux2c4ya@alap3.anarazel.de
Avoid allocating never-used entries in stmtCacheEntries[], other than the
intentionally-unused zero'th entry. Tie the array size directly to the
bucket count and size, rather than having undocumented dependencies between
three magic constants. Fix the hash calculation to be platform-independent
--- notably, it was sensitive to the signed'ness of "char" before, not to
mention having an unnecessary hard-wired dependency on the existence and
size of type "long long". (The lack of complaints says it's been a long
time since anybody tried to build PG on a compiler without "long long",
and certainly with the requirement for C99 this isn't a live bug anymore.
But it's still not per project coding style.) Fix ecpg_auto_prepare's
new-cache-entry path so that it increments the exec count for the new
cache entry not the dummy zero'th entry.
The last of those is an actual bug, though one of little consequence;
the rest is mostly future-proofing and neatnik-ism. Doesn't seem
necessary to back-patch.
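The signed'ness point is the classic hazard of hashing text through plain "char"; the platform-independent form accumulates unsigned bytes, for example (an illustrative function, not ecpglib's actual hash):

    unsigned int
    hash_statement(const char *stmt)
    {
        unsigned int hash = 0;

        /* cast each byte so the result does not depend on whether
         * "char" is signed on this platform */
        while (*stmt)
            hash = hash * 31 + (unsigned char) *stmt++;
        return hash;
    }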
This removes a megabyte of storage that isn't used at all in ecpglib's
default operating mode --- you have to enable auto-prepare to get any
use out of it. Seems well worth the trouble to allocate on demand.
Discussion: https://postgr.es/m/20181015200754.7y7zfuzsoux2c4ya@alap3.anarazel.de
Looking at this code made my head hurt. Format the comments more
like the way it's done elsewhere, break a few overly long lines.
No actual code changes in this commit.
Removing the separate Windows expected-files in commit f1885386f
turns out to have been too optimistic: on most (but not all!) of our
Windows buildfarm members, the tests still print floats with three
exponent digits, because they're invoking the native printf()
not snprintf.c.
But rather than put back the extra expected-files, let's hack
the three tests in question so that they adjust float formatting
the same way snprintf.c does.
Discussion: https://postgr.es/m/18890.1539374107@sss.pgh.pa.us
I missed this in my prior commit because it doesn't matter in non-VPATH
builds.
Per buildfarm.
Windows, alone among our supported platforms, likes to emit three-digit
exponent fields even when two digits would do. Adjust such results to
look like the way everyone else does it. Eliminate a bunch of variant
expected-output files that were needed only because of this quirk.
Discussion: https://postgr.es/m/2934.1539122454@sss.pgh.pa.us
These types have been deprecated for a *long* time.
Catversion bump, for obvious reasons.
Author: Andres Freund
Discussion:
https://postgr.es/m/20181009192237.34wjp3nmw7oynmmr@alap3.anarazel.de
https://postgr.es/m/20171213080506.cwjkpcz3bkk6yz2u@alap3.anarazel.de
https://postgr.es/m/25615.1513115237@sss.pgh.pa.us
Build a third version of libpgcommon.a, with -fPIC and -DFRONTEND,
as commit ea53100d5 did for src/port. Use that in libpq to avoid
symlinking+rebuilding source files retail.
Also adjust ecpg to use the new src/port and src/common libraries.
Arrange to install these libraries, too, to simplify out-of-tree
builds of shared libraries that need any of these modules.
Discussion: https://postgr.es/m/13022.1538003440@sss.pgh.pa.us
Discussion: https://postgr.es/m/E1g5Y8r-0006vs-QA@gemulon.postgresql.org
strerror.c now requires strlcpy() in some cases, and a couple of the
ecpg libraries did not have that at hand. Pull it in from src/port/
following the usual recipe. Per buildfarm.
snprintf.c requires isnan(), which requires -lm on some platforms.
libpq never bothered with -lm before, but now it needs it.
strerror.c tries to translate a string or two, which requires -lintl.
We'd managed never to need that anywhere in ecpg/pgtypeslib/ before,
but now we do.
Per buildfarm and a report from Peter Eisentraut.
Discussion: https://postgr.es/m/20180926190934.ea4xvzhkayuw7gkx@alap3.anarazel.de
Discussion: https://postgr.es/m/f67b5008-9f01-057f-2bff-558cb53af851@2ndquadrant.com
We've spent an awful lot of effort over the years in coping with
platform-specific vagaries of the *printf family of functions. Let's just
forget all that mess and standardize on always using src/port/snprintf.c.
This gets rid of a lot of configure logic, and it will allow a saner
approach to dealing with %m (though actually changing that is left for
a follow-on patch).
Preliminary performance testing suggests that as it stands, snprintf.c is
faster than the native printf functions for some tasks on some platforms,
and slower for other cases. A pending patch will improve that, though
cases with floating-point conversions will doubtless remain slower unless
we want to put a *lot* of effort into that. Still, we've not observed
that *printf is really a performance bottleneck for most workloads, so
I doubt this matters much.
Patch by me, reviewed by Michael Paquier
Discussion: https://postgr.es/m/2975.1526862605@sss.pgh.pa.us
elog.c has long had a private strerror wrapper that handles assorted
possible failures or deficiencies of the platform's strerror. On Windows,
it also knows how to translate Winsock error codes, which the native
strerror does not. Move all this code into src/port/strerror.c and
define strerror() as a macro that invokes it, so that both our frontend
and backend code will have all of this behavior.
I believe this constitutes an actual bug fix on Windows, since AFAICS
our frontend code did not report Winsock error codes properly before this.
However, the main point is to lay the groundwork for implementing %m
in src/port/snprintf.c: the behavior we want %m to have is this one,
not the native strerror's.
Note that this throws away the prior use of src/port/strerror.c,
which was to implement strerror() on platforms lacking it. That's
been dead code for nigh twenty years now, since strerror() was
already required by C89.
We should likewise cause strerror_r to use this behavior, but
I'll tackle that separately.
Patch by me, reviewed by Michael Paquier
Discussion: https://postgr.es/m/2975.1526862605@sss.pgh.pa.us
On many modern platforms, /etc/localtime is a symlink to a file within the
IANA database. Reading the symlink lets us find out the name of the system
timezone directly, without going through the brute-force search embodied in
scan_available_timezones(). This shortens the runtime of initdb by some
tens of ms, which is helpful for the buildfarm, and it also allows us to
reliably select the same zone name the system was actually configured for,
rather than possibly choosing one of IANA's many zone aliases. (For
example, in a system configured for "Asia/Tokyo", the brute-force search
would not choose that name but its alias "Japan", on the grounds of the
latter string being shorter. More surprisingly, "Navajo" is preferred
to either "America/Denver" or "US/Mountain", as seen in an old complaint
from Josh Berkus.)
If /etc/localtime doesn't exist, or isn't a symlink, or we can't make
sense of its contents, or the contents match a zone we know but that
zone doesn't match the observed behavior of localtime(), fall back to
the brute-force search.
Also, tweak initdb so that it prints the zone name it selected.
In passing, replace the last few references to the "Olson" database in
code comments with "IANA", as that's been our preferred term since
commit b2cbced9e.
Patch by me, per a suggestion from Robert Haas; review by Michael Paquier
Discussion: https://postgr.es/m/7408.1525812528@sss.pgh.pa.us
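The core of the shortcut is just readlink() plus stripping the zoneinfo prefix; a simplified standalone sketch that assumes the common /usr/share/zoneinfo install path and omits the validation against localtime():

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int
    main(void)
    {
        char        buf[1024];
        const char *zonedir = "/usr/share/zoneinfo/";
        ssize_t     len = readlink("/etc/localtime", buf, sizeof(buf) - 1);

        if (len > 0)
        {
            buf[len] = '\0';
            /* e.g. "/usr/share/zoneinfo/Asia/Tokyo" -> "Asia/Tokyo" */
            if (strncmp(buf, zonedir, strlen(zonedir)) == 0)
                printf("%s\n", buf + strlen(zonedir));
        }
        return 0;
    }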
When we removed the ecpg-specific versions, we also removed the
"(PostgreSQL)" from the --version output, which we show in other
programs.
Reported-by: Ioseph Kim <pgsql-kr@postgresql.kr>
Commits c6b3c939b (which fixed the precedence of >=, <=, <> operators)
and 865f14a2d (which added support for the standard => notation for
named arguments) created a class of lexer tokens which look like
multi-character operators but which have their own token IDs distinct
from Op. However, longest-match rules meant that following any of
these tokens with another operator character, as in (1<>-1), would
cause them to be incorrectly returned as Op.
The error here isn't immediately obvious, because the parser would
usually still find the correct operator via the Op token, but there
were more subtle problems:
1. If immediately followed by a comment or +-, >= <= <> would be given
the old precedence of Op rather than the correct new precedence;
2. If followed by a comment, != would be returned as Op rather than as
NOT_EQUAL, causing it not to be found at all;
3. If followed by a comment or +-, the => token for named arguments
would be lexed as Op, causing the argument to be mis-parsed as a
simple expression, usually causing an error.
Fix by explicitly checking for the operators in the {operator} code
block in addition to all the existing special cases there.
Backpatch to 9.5 where the problem was introduced.
Analysis and patch by me; review by Tom Lane.
Discussion: https://postgr.es/m/87va851ppl.fsf@news-spur.riddles.org.uk
The lexer's handling of operators contained an O(N^3) hazard when
dealing with long strings of + or - characters; it seems hard to
prevent this case from being O(N^2), but the additional N multiplier
was not needed.
Backpatch all the way since this has been there since 7.x, and it
presents at least a mild hazard in that trying to do Bind, PREPARE or
EXPLAIN on a hostile query could take excessive time (without
honouring cancels or timeouts) even if the query was never executed.
Fix a small number of places that were testing the result of snprintf()
but doing so incorrectly. The right test for buffer overrun, per C99,
is "result >= bufsize" not "result > bufsize". Some places were also
checking for failure with "result == -1", but the standard only says
that a negative value is delivered on failure.
(Note that this only makes these places correct if snprintf() delivers
C99-compliant results. But at least now these places are consistent
with all the other places where we assume that.)
Also, make psql_start_test() and isolation_start_test() check for
buffer overrun while constructing their shell commands. There seems
like a higher risk of overrun, with more severe consequences, here
than there is for the individual file paths that are made elsewhere
in the same functions, so this seemed like a worthwhile change.
Also fix guc.c's do_serialize() to initialize errno = 0 before
calling vsnprintf. In principle, this should be unnecessary because
vsnprintf should have set errno if it returns a failure indication ...
but the other two places this coding pattern is cribbed from don't
assume that, so let's be consistent.
These errors are all very old, so back-patch as appropriate. I think
that only the shell command overrun cases are even theoretically
reachable in practice, but there's not much point in erroneous error
checks.
Discussion: https://postgr.es/m/17245.1534289329@sss.pgh.pa.us
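For reference, a minimal example of the C99-correct checks, with made-up function and variable names:

    #include <stdio.h>

    static int
    build_path(char *buf, size_t bufsize, const char *dir, const char *file)
    {
        int     n = snprintf(buf, bufsize, "%s/%s", dir, file);

        if (n < 0)
            return -1;              /* failure: the standard only promises "negative" */
        if ((size_t) n >= bufsize)
            return -1;              /* truncated: n is the length that would have been
                                     * written, so "n > bufsize" misses the exact-fit case */
        return n;
    }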
GNUmakefile.in defined a macro "garbage" that seems to have been meant
as a suitable target for automatic "rm -rf" treatment, but it isn't
actually used anywhere (and indeed never was, AFAICT).
Moreover, we have concluded that the Makefiles shouldn't take it upon
themselves to remove files that aren't expected by-products of building,
so that doing anything like that would be against project policy anyway.
Hence, just remove the macro.
Grepping around finds another violation of that policy in ecpg/preproc,
so clean that up too.
Daniel Gustafsson (ecpg change by me)
Discussion: https://postgr.es/m/AFBEF63E-E19D-4EBB-9F08-4617CDC751ED@yesql.se
A collection of typos I happened to spot while reading code, as well as
grepping for common mistakes.
Backpatch to all supported versions, as applicable, to avoid conflicts
when backporting other commits in the future.
It appears that there are more files whose header comment paths are
wrong. So, fix those paths. No backpatching per proposal of Tom Lane.
Discussion: https://postgr.es/m/CAPpHfdsJyYbOj59MOQL%2B4XxdcomLSLfLqBtAvwR%2BpsCqj3ELdQ%40mail.gmail.com
We include <float.h> in every place that needs isnan(), because MSVC
used to require it. However, since MSVC 2013 that's no longer necessary
(cf. commit cec8394b5ccd), so we can retire the inclusion to a
version-specific stanza in win32_port.h, where it doesn't need to
pollute random .c files. The header is of course still needed in a few
places for other reasons.
I (Álvaro) removed float.h from a few more files than in Emre's original
patch. This doesn't break the build in my system, but we'll see what
the buildfarm has to say about it all.
Author: Emre Hasegeli
Discussion: https://postgr.es/m/CAE2gYzyc0+5uG+Cd9-BSL7NKC8LSHLNg1Aq2=8ubjnUwut4_iw@mail.gmail.com
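The version-specific stanza is of this general shape; a hedged sketch assuming _MSC_VER 1800 corresponds to MSVC 2013, not the literal win32_port.h text:

    #if defined(_MSC_VER) && _MSC_VER < 1800
    /* Older MSVC lacks isnan()/isinf() in <math.h>; supply them via <float.h>. */
    #include <float.h>
    #define isnan(x) _isnan(x)
    #define isinf(x) (!_finite(x) && !_isnan(x))
    #endif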