Commit message  (Author, Date)
* Stamp 9.1.24.  (tag REL9_1_24; Tom Lane, 2016-10-24)
* Translation updates  (Peter Eisentraut, 2016-10-24)
  Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
  Source-Git-Hash: fe0d5c56ab1b2599e37d41f639be97ac6947ee69
* Release notes for 9.6.1, 9.5.5, 9.4.10, 9.3.15, 9.2.19, 9.1.24.  (Tom Lane, 2016-10-23)
* Avoid testing tuple visibility without buffer lock in RI_FKey_check().  (Tom Lane, 2016-10-23)
  Despite the argumentation I wrote in commit 7a2fe85b0, it's unsafe to do this, because in corner cases it's possible for HeapTupleSatisfiesSelf to try to set hint bits on the target tuple; and at least since 8.2 we have required the buffer content lock to be held while setting hint bits. The added regression test exercises one such corner case. Unpatched, it causes an assertion failure in assert-enabled builds, or otherwise would cause a hint bit change in a buffer we don't hold lock on, which given the right race condition could result in checksum failures or other data consistency problems. The odds of a problem in the field are probably pretty small, but nonetheless back-patch to all supported branches.
  Report: <19391.1477244876@sss.pgh.pa.us>
* Doc: wording tweak for PERL, PYTHON, TCLSH configuration variables.  (Tom Lane, 2016-10-21)
  Replace "Full path to ..." with "Full path name of ...". At least one user has misinterpreted the existing wording as meaning "Directory containing ...".
* Sync our copy of the timezone library with IANA release tzcode2016h.  (Tom Lane, 2016-10-20)
  This absorbs a fix for a symlink-manipulation bug in zic that was introduced in 2016g. It probably isn't interesting for our use-case, but I'm not quite sure, so let's update while we're at it.
* Update time zone data files to tzdata release 2016h.  (Tom Lane, 2016-10-20)
  (Didn't I just do this? Oh well.) DST law changes in Palestine. Historical corrections for Turkey. Switch to numeric abbreviations for Asia/Colombo.
* Another portability fix for tzcode2016g update.  (Tom Lane, 2016-10-19)
  clang points out that SIZE_MAX wouldn't fit into an int, which means this comparison is pretty useless. Per report from Thomas Munro.
* Windows portability fix.  (Tom Lane, 2016-10-19)
  Per buildfarm.
* Sync our copy of the timezone library with IANA release tzcode2016g.  (Tom Lane, 2016-10-19)
  This is mostly to absorb some corner-case fixes in zic for year-2037 timestamps. The other changes that have been made are unlikely to affect our usage, but nonetheless we may as well take 'em.
* Suppress "Factory" zone in pg_timezone_names view for tzdata >= 2016g.Tom Lane2016-10-19
| | | | | | IANA got rid of the really silly "abbreviation" and replaced it with one that's only moderately silly. But it's still pointless, so keep on not showing it.
* Update time zone data files to tzdata release 2016g.  (Tom Lane, 2016-10-19)
  DST law changes in Turkey. Historical corrections for America/Los_Angeles, Europe/Kirov, Europe/Moscow, Europe/Samara, and Europe/Ulyanovsk. Rename Asia/Rangoon to Asia/Yangon, with a backward compatibility link. The IANA crew continue their campaign to replace invented time zone abbreviations with numeric GMT offsets. This update changes numerous zones in Antarctica and the former Soviet Union, for instance Antarctica/Casey now reports "+08" not "AWST" in the pg_timezone_names view. I kept these abbreviations in the tznames/ data files, however, so that we will still accept them for input. (We may want to start trimming those files someday, but today is not that day.) An exception is that since IANA no longer claims that "AMT" is in use in Armenia for GMT+4, I replaced it in the Default file with GMT-4, corresponding to Amazon Time, which is in use in South America. It may be that that meaning is also invented and IANA will drop it in a future update; but for now, it seems silly to give pride of place to a meaning not traceable to IANA over one that is.
* Fix cidin() to handle values above 2^31 platform-independently.  (Tom Lane, 2016-10-18)
  CommandId is declared as uint32, and values up to 4G are indeed legal. cidout() handles them properly by treating the value as unsigned int. But cidin() was just using atoi(), which has platform-dependent behavior for values outside the range of signed int, as reported by Bart Lengkeek in bug #14379. Use strtoul() instead, as xidin() does. In passing, make some purely cosmetic changes to make xidin/xidout look more like cidin/cidout; the former didn't have a monopoly on best practice IMO. Neither xidin nor cidin makes any attempt to throw error for invalid input. I didn't change that here, and am not sure it's worth worrying about since neither is really a user-facing type. The point is just to ensure that indubitably-valid inputs work as expected. It's been like this for a long time, so back-patch to all supported branches.
  Report: <20161018152550.1413.6439@wrigleys.postgresql.org>
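  As a minimal standalone illustration of the parsing difference (a sketch, not the actual cidin()/xidin() code; the sample value is arbitrary): atoi() overflows signed int for values above 2^31, with platform-dependent results, while strtoul() covers the full unsigned 32-bit range.

      /* Illustrative sketch only -- not the cidin()/xidin() source. */
      #include <stdio.h>
      #include <stdlib.h>

      int
      main(void)
      {
          const char *str = "3000000000";     /* legal as a CommandId, but > INT_MAX */

          /* atoi() overflows signed int here; the result is platform-dependent. */
          int          bad = atoi(str);

          /* strtoul() parses the full unsigned 32-bit range portably. */
          unsigned int good = (unsigned int) strtoul(str, NULL, 10);

          printf("atoi: %d, strtoul: %u\n", bad, good);
          return 0;
      }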
* Fix assorted integer-overflow hazards in varbit.c.  (Tom Lane, 2016-10-14)
  bitshiftright() and bitshiftleft() would recursively call each other infinitely if the user passed INT_MIN for the shift amount, due to integer overflow in negating the shift amount. To fix, clamp to -VARBITMAXLEN. That doesn't change the results since any shift distance larger than the input bit string's length produces an all-zeroes result. Also fix some places that seemed inadequately paranoid about input typmods exceeding VARBITMAXLEN. While a typmod accepted by anybit_typmodin() will certainly be much less than that, at least some of these spots are reachable with user-chosen integer values.
  Andreas Seltenreich and Tom Lane
  Discussion: <87d1j2zqtz.fsf@credativ.de>
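  The hazard in miniature (a sketch, not the varbit.c code; the VARBITMAXLEN value here is a stand-in): negating INT_MIN overflows, so a negative shift amount has to be clamped before it is negated.

      /* Sketch of the overflow and the clamp-before-negate fix. */
      #include <limits.h>
      #include <stdio.h>

      #define VARBITMAXLEN (INT_MAX - 8)      /* stand-in value for illustration */

      static int
      shift_magnitude(int shift)
      {
          /*
           * Clamp first: if shift == INT_MIN, computing -shift overflows, and the
           * "right shift by -n is a left shift by n" hand-off between the two
           * functions never terminates.  Clamping loses nothing, because any
           * distance beyond the bit string's length yields all zeroes anyway.
           */
          if (shift < -VARBITMAXLEN)
              shift = -VARBITMAXLEN;
          return -shift;                      /* now safe to negate */
      }

      int
      main(void)
      {
          printf("%d\n", shift_magnitude(INT_MIN));
          return 0;
      }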
* In PQsendQueryStart(), avoid leaking any left-over async result.  (Tom Lane, 2016-10-10)
  Ordinarily there would not be an async result sitting around at this point, but it appears that in corner cases there can be. Considering all the work we're about to launch, it's hardly going to cost anything noticeable to check. It's been like this forever, so back-patch to all supported branches.
  Report: <CAD-Qf1eLUtBOTPXyFQGW-4eEsop31tVVdZPu4kL9pbQ6tJPO8g@mail.gmail.com>
* Clear OpenSSL error queue after failed X509_STORE_load_locations() call.  (Heikki Linnakangas, 2016-10-07)
  Leaving the error in the error queue used to be harmless, because the X509_STORE_load_locations() call used to be the last step in initialize_SSL(), and we would clear the queue before the next SSL_connect() call. But the previous commit moved things around. The symptom was that if a CRL file was not found, and one of the subsequent initialization steps, like loading the client certificate or private key, failed, we would incorrectly print the "no such file" error message from the earlier X509_STORE_load_locations() call as the reason.
  Backpatch to all supported versions, like the previous patch.
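  Roughly the pattern involved, sketched with the public OpenSSL API (a hedged sketch; libpq's initialize_SSL() is more involved and the function name here is hypothetical): when the CRL load is allowed to fail, the leftover queue entry has to be discarded so a later failure is not misreported.

      /* Sketch only; error reporting and the rest of SSL setup omitted. */
      #include <openssl/err.h>
      #include <openssl/ssl.h>
      #include <openssl/x509_vfy.h>

      static void
      load_optional_crl(SSL_CTX *ctx, const char *crl_file)
      {
          X509_STORE *store = SSL_CTX_get_cert_store(ctx);

          if (X509_STORE_load_locations(store, crl_file, NULL) != 1)
          {
              /*
               * A missing CRL is non-fatal here, but the failed call left an
               * entry in OpenSSL's error queue.  Clear it, or a later failure
               * (bad client cert, bad key, ...) would be reported with this
               * stale "no such file" message instead of its own.
               */
              ERR_clear_error();
          }
      }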
* Don't share SSL_CTX between libpq connections.  (Heikki Linnakangas, 2016-10-07)
  There were several issues with the old coding:
  1. There was a race condition if two threads opened a connection at the same time. We used a mutex around SSL_CTX_* calls, but that was not enough, e.g. if one thread called SSL_CTX_load_verify_locations() with one path, and another thread set it with a different path, before the first thread got to establish the connection.
  2. Opening two different connections, with different sslrootcert settings, seemed to fail outright with "SSL error: block type is not 01". Not sure why.
  3. We created the SSL object before calling SSL_CTX_load_verify_locations and SSL_CTX_use_certificate_chain_file on the SSL context. That was wrong, because the options set on the SSL context are propagated to the SSL object when the SSL object is created. If they are set after the SSL object has already been created, they won't take effect until the next connection. (This is bug #14329.)
  At least some of these could've been fixed while still using a shared context, but it would've been more complicated and error-prone. To keep things simple, let's just use a separate SSL context for each connection, and accept the overhead. Backpatch to all supported versions.
  Report, analysis and test case by Kacper Zuk.
  Discussion: <20160920101051.1355.79453@wrigleys.postgresql.org>
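  The ordering constraint in item 3, sketched against the public OpenSSL API (a sketch under simplifying assumptions: the function name is hypothetical, error details are omitted, and this is not libpq's actual fe-secure code): configure the per-connection SSL_CTX fully before SSL_new(), since the settings are copied into the SSL object at creation time.

      /* Per-connection setup sketch. */
      #include <openssl/ssl.h>

      static SSL *
      open_connection_ssl(const char *rootcert, const char *clientcert)
      {
          /* One context per connection: no sharing, no cross-thread races. */
          SSL_CTX *ctx = SSL_CTX_new(SSLv23_method());

          if (ctx == NULL)
              return NULL;

          /*
           * Configure the context BEFORE creating the SSL object; its settings
           * are copied into the SSL object by SSL_new(), and later changes to
           * the context would not affect this connection.
           */
          if (SSL_CTX_load_verify_locations(ctx, rootcert, NULL) != 1 ||
              SSL_CTX_use_certificate_chain_file(ctx, clientcert) != 1)
          {
              SSL_CTX_free(ctx);
              return NULL;
          }

          SSL *ssl = SSL_new(ctx);

          /*
           * The SSL object keeps its own reference to the context, so the
           * per-connection context can be released here; it goes away when
           * the SSL object does.
           */
          SSL_CTX_free(ctx);
          return ssl;
      }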
* Include <sys/select.h> where needed  (Alvaro Herrera, 2016-09-27)
  <sys/select.h> is required by POSIX.1-2001 to get the prototype of select(2), but nearly no systems enforce that because older standards let you get away with including some other headers. Recent OpenBSD hacking has removed that frail touch of friendliness, however, which broke some compiles; fix all the way back to 9.1 by adding the required standard header. Only vacuumdb.c was reported to fail, but it seems easier to fix the whole lot in one fell swoop.
  Per bug #14334 by Sean Farrell.
* Doc: fix examples of # operators so they actually work.  (Tom Lane, 2016-09-23)
  These worked as-is until around 7.0, but fail in newer versions because there are more operators named "#". Besides, it's a bit inconsistent that only two of the examples on this page lack type names on their constants.
  Report: <20160923081530.1517.75670@wrigleys.postgresql.org>
* Be sure to rewind the tuplestore read pointer in non-leader CTEScan nodes.  (Tom Lane, 2016-09-22)
  ExecInitCteScan supposed that it didn't have to do anything to the extra tuplestore read pointer it gets from tuplestore_alloc_read_pointer. However, it needs this read pointer to be positioned at the start of the tuplestore, while tuplestore_alloc_read_pointer is actually defined as cloning the current position of read pointer 0. In normal situations that accidentally works because we initialize the whole plan tree at once, before anything gets read. But it fails in an EvalPlanQual recheck, as illustrated in bug #14328 from Dima Pavlov. To fix, just forcibly rewind the pointer after tuplestore_alloc_read_pointer. The cost of doing so is negligible unless the tuplestore is already in TSS_READFILE state, which wouldn't happen in normal cases. We could consider altering tuplestore's API to make that case cheaper, but that would make for a more invasive back-patch and it doesn't seem worth it. This has been broken probably for as long as we've had CTEs, so back-patch to all supported branches.
  Discussion: <32468.1474548308@sss.pgh.pa.us>
* doc: Fix documentation to match actual make output  (Peter Eisentraut, 2016-09-20)
  based on patch from Takeshi Ideriha <iderihatakeshi@gmail.com>
* doc: Correct ALTER USER MAPPING example  (Peter Eisentraut, 2016-09-20)
  The existing example threw an error.
  From: gabrielle <gorthx@gmail.com>
* Fix ecpg -? option on Windows, add -V alias for --version.  (Heikki Linnakangas, 2016-09-18)
  This makes the -? and -V options work consistently with other binaries. --help and --version are now only recognized as the first option, i.e. "ecpg --foobar --help" no longer prints the help, but that's consistent with most of our other binaries, too. Backpatch to all supported versions.
  Haribabu Kommi
  Discussion: <CAJrrPGfnRXvmCzxq6Dy=stAWebfNHxiL+Y_z7uqksZUCkW_waQ@mail.gmail.com>
* Fix VACUUM_TRUNCATE_LOCK_WAIT_INTERVAL  (Simon Riggs, 2016-09-09)
  lazy_truncate_heap() was waiting for VACUUM_TRUNCATE_LOCK_WAIT_INTERVAL, but in microseconds not milliseconds as originally intended. Found by code inspection.
  Simon Riggs
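  The unit mix-up in a self-contained sketch (illustrative only: usleep() stands in for the backend's microsecond-based sleep primitive, and the constant's value is assumed here):

      /* Illustration of the ms-vs-us confusion; not the vacuumlazy.c code. */
      #include <unistd.h>

      #define VACUUM_TRUNCATE_LOCK_WAIT_INTERVAL 50  /* intended as milliseconds */

      int
      main(void)
      {
          /* Wrong: this sleeps 50 microseconds, not 50 milliseconds. */
          /* usleep(VACUUM_TRUNCATE_LOCK_WAIT_INTERVAL); */

          /* Right: convert milliseconds to the microseconds the call expects. */
          usleep(VACUUM_TRUNCATE_LOCK_WAIT_INTERVAL * 1000L);
          return 0;
      }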
* Fix mdtruncate() to close fd.c handle of deleted segments.  (Andres Freund, 2016-09-08)
  mdtruncate() forgot to FileClose() a segment's mdfd_vfd, when deleting it. That led to a fd.c handle to a truncated file being kept open until backend exit. The issue appears to have been introduced way back in 1a5c450f3024ac5; before that the handle was closed inside FileUnlink(). The impact of this bug is limited - only VACUUM and ON COMMIT TRUNCATE for temporary tables truncate files in place (i.e. TRUNCATE itself is not affected), and the relation has to be bigger than 1GB. The consequences of a leaked fd.c handle aren't severe either.
  Discussion: <20160908220748.oqh37ukwqqncbl3n@alap3.anarazel.de>
  Backpatch: all supported releases
* Add regression test coverage for non-default timezone abbreviation sets.  (Tom Lane, 2016-09-04)
  After further reflection about the mess cleaned up in commit 39b691f25, I decided the main bit of test coverage that was still missing was to check that the non-default abbreviation-set files we supply are usable. Add that. Back-patch to supported branches, just because it seems like a good idea to keep this all in sync.
* Remove vestigial references to "zic" in favor of "IANA database".  (Tom Lane, 2016-09-04)
  Commit b2cbced9e instituted a policy of referring to the timezone database as the "IANA timezone database" in our user-facing documentation. Propagate that wording into a couple of places that were still using "zic" to refer to the database, which is definitely not right (zic is the compilation tool, not the data). Back-patch, not because this is very important in itself, but because we routinely cherry-pick updates to the tznames files and I don't want to risk future merge failures.
* Don't require dynamic timezone abbreviations to match underlying time zone.  (Tom Lane, 2016-09-02)
  Previously, we threw an error if a dynamic timezone abbreviation did not match any abbreviation recorded in the referenced IANA time zone entry. That seemed like a good consistency check at the time, but it turns out that a number of the abbreviations in the IANA database are things that Olson and crew made up out of whole cloth. Their current policy is to remove such names in favor of using simple numeric offsets. Perhaps unsurprisingly, a lot of these made-up abbreviations have varied in meaning over time, which meant that our commit b2cbced9e and later changes made them into dynamic abbreviations. So with newer IANA database versions that don't mention these abbreviations at all, we fail, as reported in bug #14307 from Neil Anderson. It's worse than just a few unused-in-the-wild abbreviations not working, because the pg_timezone_abbrevs view stops working altogether (since its underlying function tries to compute the whole view result in one call). We considered deleting these abbreviations from our abbreviations list, but the problem with that is that we can't stay ahead of possible future IANA changes. Instead, let's leave the abbreviations list alone, and treat any "orphaned" dynamic abbreviation as just meaning the referenced time zone. It will behave a bit differently than it used to, in that you can't any longer override the zone's standard vs. daylight rule by using the "wrong" abbreviation of a pair, but that's better than failing entirely. (Also, this solution can be interpreted as adding a small new feature, which is that any abbreviation a user wants can be defined as referencing a time zone name.) Back-patch to all supported branches, since this problem affects all of them when using tzdata 2016f or newer.
  Report: <20160902031551.15674.67337@wrigleys.postgresql.org>
  Discussion: <6189.1472820913@sss.pgh.pa.us>
* Prevent starting a standalone backend with standby_mode on.  (Tom Lane, 2016-08-31)
  This can't really work because standby_mode expects there to be more WAL arriving, which there will not ever be because there's no WAL receiver process to fetch it. Moreover, if standby_mode is on then hot standby might also be turned on, causing even more strangeness because that expects read-only sessions to be executing in parallel. Bernd Helmle reported a case where btree_xlog_delete_get_latestRemovedXid got confused, but rather than band-aiding individual problems it seems best to prevent getting anywhere near this state in the first place. Back-patch to all supported branches. In passing, also fix some omissions of errcodes in other ereport's in readRecoveryCommandFile().
  Michael Paquier (errcode hacking by me)
  Discussion: <00F0B2CEF6D0CEF8A90119D4@eje.credativ.lan>
* Fix instability in parallel regression tests.  (Tom Lane, 2016-08-25)
  Commit f0c7b789a added a test case in case.sql that creates and then drops both an '=' operator and the type it's for. Given the right timing, that can cause a "cache lookup failed for type" failure in concurrent sessions, which see the '=' operator as a potential match for '=' in a query, but then the type is gone by the time they inquire into its properties. It might be nice to make that behavior more robust someday, but as a back-patchable solution, adjust the new test case so that the operator is never visible to other sessions. Like the previous commit, back-patch to all supported branches.
  Discussion: <5983.1471371667@sss.pgh.pa.us>
* Fix improper repetition of previous results from a hashed aggregate.  (Tom Lane, 2016-08-24)
  ExecReScanAgg's check for whether it could re-use a previously calculated hashtable neglected the possibility that the Agg node might reference PARAM_EXEC Params that are not referenced by its input plan node. That's okay if the Params are in upper tlist or qual expressions; but if one appears in aggregate input expressions, then the hashtable contents need to be recomputed when the Param's value changes. To avoid unnecessary performance degradation in the case of a Param that isn't within an aggregate input, add logic to the planner to determine which Params are within aggregate inputs. This requires a new field in struct Agg, but fortunately we never write plans to disk, so this isn't an initdb-forcing change. Per report from Jeevan Chalke. This has been broken since forever, so back-patch to all supported branches.
  Andrew Gierth, with minor adjustments by me
  Report: <CAM2+6=VY8ykfLT5Q8vb9B6EbeBk-NGuLbT6seaQ+Fq4zXvrDcA@mail.gmail.com>
* Fix -e option in contrib/intarray/bench/bench.pl.  (Tom Lane, 2016-08-17)
  As implemented, -e ran an EXPLAIN but then discarded the output, which certainly seems pointless. Make it print to stdout instead. It's been like that forever, so back-patch to all supported branches.
  Daniel Gustafsson, reviewed by Andreas Scherbaum
  Patch: <B97BDCB7-A3B3-4734-90B5-EDD586941629@yesql.se>
* Remove bogus dependencies on NUMERIC_MAX_PRECISION.  (Tom Lane, 2016-08-14)
  NUMERIC_MAX_PRECISION is a purely arbitrary constraint on the precision and scale you can write in a numeric typmod. It might once have had something to do with the allowed range of a typmod-less numeric value, but at least since 9.1 we've allowed, and documented that we allowed, any value that would physically fit in the numeric storage format; which is something over 100000 decimal digits, not 1000. Hence, get rid of numeric_in()'s use of NUMERIC_MAX_PRECISION as a limit on the allowed range of the exponent in scientific-format input. That was especially silly in view of the fact that you can enter larger numbers as long as you don't use 'e' to do it. Just constrain the value enough to avoid localized overflow, and let make_result be the final arbiter of what is too large. Likewise adjust ecpg's equivalent of this code. Also get rid of numeric_recv()'s use of NUMERIC_MAX_PRECISION to limit the number of base-NBASE digits it would accept. That created a dump/restore hazard for binary COPY without doing anything useful; the wire-format limit on number of digits (65535) is about as tight as we would want. In HEAD, also get rid of pg_size_bytes()'s unnecessary intimacy with what the numeric range limit is. That code doesn't exist in the back branches. Per gripe from Aravind Kumar. Back-patch to all supported branches, since they all contain the documentation claim about allowed range of NUMERIC (cf commit cabf5d84b).
  Discussion: <2895.1471195721@sss.pgh.pa.us>
* Fix regression test parallel-make hazard.  (Tom Lane, 2016-08-12)
  Back-patch 9.4-era commit 384f933046dc9e9a2b416f5f7b3be30b93587c63 into the previous branches. Although that was only advertised as repairing a problem with missed header-file dependencies, it turns out to also be important for parallel make safety. The previous coding allowed two independent make jobs to get launched concurrently in contrib/spi. Normally this would be OK, because they are building independent targets; but if --enable-depend is in use, it's unsafe, because one make run might try to read a .deps file that the other one is in process of rewriting. This is evidently the cause of buildfarm member francolin's recent failure in the 9.2 branch. I believe this patch will result in only one subsidiary make run, making it safe(r).
  Report: http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=francolin&dt=2016-08-12%2017%3A12%3A52
* Doc: fix bad link in 9.1 branch only.  (Tom Lane, 2016-08-11)
  This table apparently got renamed somewhere between 9.1 and 9.2. Per buildfarm.
* Doc: write some for adminpack.  (Tom Lane, 2016-08-10)
  Previous contents of adminpack.sgml were rather far short of project norms. Not to mention being outright wrong about the signature of pg_file_read().
* Fix typo  (Peter Eisentraut, 2016-08-09)
* Doc: clarify description of CREATE/ALTER FUNCTION ... SET FROM CURRENT.  (Tom Lane, 2016-08-09)
  Per discussion with David Johnston.
* Stamp 9.1.23.  (tag REL9_1_23; Tom Lane, 2016-08-08)
* Last-minute updates for release notes.  (Tom Lane, 2016-08-08)
  Security: CVE-2016-5423, CVE-2016-5424
* Fix several one-byte buffer over-reads in to_number  (Peter Eisentraut, 2016-08-08)
  Several places in NUM_numpart_from_char(), which is called from the SQL function to_number(text, text), could accidentally read one byte past the end of the input buffer (which comes from the input text datum and is not null-terminated).
  1. One leading space character would be skipped, but there was no check that the input was at least one byte long. This does not happen in practice, but for defensiveness, add a check anyway.
  2. Commit 4a3a1e2cf apparently accidentally doubled the code that skips one space character (so that two spaces might be skipped), but there was no overflow check before skipping the second byte. Fix by removing that duplicate code.
  3. A logic error would allow a one-byte over-read when looking for a trailing sign (S) placeholder.
  In each case, the extra byte cannot be read out directly, but looking at it might cause a crash.
  The third item was discovered by Piotr Stefaniak; the first two were found and analyzed by Tom Lane and Peter Eisentraut.
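  The general defensive pattern, in an isolated sketch (hypothetical helpers; the real NUM_numpart_from_char() works on format-picture state): the input text datum is not NUL-terminated, so every look at "the next byte" needs an explicit length check first.

      /* Sketch of bounds-checked scanning of a non-NUL-terminated buffer. */
      #include <stdbool.h>
      #include <stddef.h>

      /* Skip at most one leading space, never reading past buf + len. */
      static size_t
      skip_one_leading_space(const char *buf, size_t len)
      {
          size_t pos = 0;

          if (pos < len && buf[pos] == ' ')   /* length check BEFORE the read */
              pos++;
          return pos;
      }

      /* Is there a trailing sign at position pos?  Never dereference buf[len]. */
      static bool
      has_sign_at(const char *buf, size_t len, size_t pos)
      {
          return pos < len && (buf[pos] == '+' || buf[pos] == '-');
      }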
* Translation updates  (Peter Eisentraut, 2016-08-08)
  Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
  Source-Git-Hash: bd56d09b3b4cc9f2b6def7e64b3a8842460c1bf0
* Fix two errors with nested CASE/WHEN constructs.  (Tom Lane, 2016-08-08)
  ExecEvalCase() tried to save a cycle or two by passing &econtext->caseValue_isNull as the isNull argument to its sub-evaluation of the CASE value expression. If that subexpression itself contained a CASE, then *isNull was an alias for econtext->caseValue_isNull within the recursive call of ExecEvalCase(), leading to confusion about whether the inner call's caseValue was null or not. In the worst case this could lead to a core dump due to dereferencing a null pointer. Fix by not assigning to the global variable until control comes back from the subexpression. Also, avoid using the passed-in isNull pointer transiently for evaluation of WHEN expressions. (Either one of these changes would have been sufficient to fix the known misbehavior, but it's clear now that each of these choices was in itself dangerous coding practice and best avoided. There do not seem to be any similar hazards elsewhere in execQual.c.) Also, it was possible for inlining of a SQL function that implements the equality operator used for a CASE comparison to result in one CASE expression's CaseTestExpr node being inserted inside another CASE expression. This would certainly result in wrong answers since the improperly nested CaseTestExpr would be caused to return the inner CASE's comparison value not the outer's. If the CASE values were of different data types, a crash might result; moreover such situations could be abused to allow disclosure of portions of server memory. To fix, teach inline_function to check for "bare" CaseTestExpr nodes in the arguments of a function to be inlined, and avoid inlining if there are any.
  Heikki Linnakangas, Michael Paquier, Tom Lane
  Report: https://github.com/greenplum-db/gpdb/pull/327
  Report: <4DDCEEB8.50602@enterprisedb.com>
  Security: CVE-2016-5423
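  The first hazard reduces to a familiar C pitfall, shown here in isolation (hypothetical types and names; this is not the execQual.c code): when a callee is handed a pointer into shared state, a recursive call reached through that pointer can clobber the caller's view of that state, so evaluate into locals and publish the result only afterwards.

      /* Sketch of the aliasing hazard; names are made up for illustration. */
      #include <stdbool.h>

      typedef struct EvalContext
      {
          int  caseValue;
          bool caseValue_isNull;
      } EvalContext;

      /* Stub standing in for recursive expression evaluation. */
      static int
      eval_expr(EvalContext *ctx, bool *isNull)
      {
          (void) ctx;
          *isNull = false;
          return 0;
      }

      static int
      eval_case(EvalContext *ctx)
      {
          bool isNull;

          /*
           * Evaluate into locals first.  Passing &ctx->caseValue_isNull directly
           * would alias the context field: a nested CASE inside the expression
           * could overwrite it mid-evaluation and confuse this level.
           */
          int  value = eval_expr(ctx, &isNull);

          /* Publish only after the subexpression is fully evaluated. */
          ctx->caseValue = value;
          ctx->caseValue_isNull = isNull;
          return value;
      }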
* Obstruct shell, SQL, and conninfo injection via database and role names.  (Noah Misch, 2016-08-08)
  Due to simplistic quoting and confusion of database names with conninfo strings, roles with the CREATEDB or CREATEROLE option could escalate to superuser privileges when a superuser next ran certain maintenance commands. The new coding rule for PQconnectdbParams() calls, documented at conninfo_array_parse(), is to pass expand_dbname=true and wrap literal database names in a trivial connection string. Escape zero-length values in appendConnStrVal(). Back-patch to 9.1 (all supported versions).
  Nathan Bossart, Michael Paquier, and Noah Misch. Reviewed by Peter Eisentraut. Reported by Nathan Bossart.
  Security: CVE-2016-5424
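  A sketch of that coding rule with the libpq API (hedged: the quoting helper here is a simplified stand-in for appendConnStrVal(), and the sample database name is invented): wrap the literal name in a one-key connection string and let expand_dbname=1 parse it, instead of splicing it into a larger conninfo by hand.

      /* Sketch of "wrap the literal database name in a trivial conninfo". */
      #include <stdio.h>
      #include <libpq-fe.h>

      /* Build "dbname='<name>'" with ' and \ escaped (simplified quoting). */
      static void
      wrap_dbname(char *buf, size_t buflen, const char *dbname)
      {
          size_t n = (size_t) snprintf(buf, buflen, "dbname='");

          for (const char *p = dbname; *p && n + 3 < buflen; p++)
          {
              if (*p == '\'' || *p == '\\')
                  buf[n++] = '\\';
              buf[n++] = *p;
          }
          snprintf(buf + n, buflen - n, "'");
      }

      int
      main(void)
      {
          char        connstr[1024];
          const char *keywords[] = {"dbname", NULL};
          const char *values[2];

          wrap_dbname(connstr, sizeof(connstr), "odd 'db' name");
          values[0] = connstr;
          values[1] = NULL;

          /*
           * expand_dbname = 1: the wrapped value is parsed as a connection
           * string, so the literal name cannot smuggle in extra parameters.
           */
          PGconn *conn = PQconnectdbParams(keywords, values, 1);

          if (PQstatus(conn) != CONNECTION_OK)
              fprintf(stderr, "%s", PQerrorMessage(conn));
          PQfinish(conn);
          return 0;
      }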
* Promote pg_dumpall shell/connstr quoting functions to src/fe_utils.  (Noah Misch, 2016-08-08)
  Rename these newly-extern functions with terms more typical of their new neighbors. No functional changes; a subsequent commit will use them in more places. Back-patch to 9.1 (all supported versions). Back branches lack src/fe_utils, so instead rename the functions in place; the subsequent commit will copy them into the other programs using them.
  Security: CVE-2016-5424
* Back-patch "Only quote libpq connection string values that need quoting."Noah Misch2016-08-08
| | | | | | | | | Back-patch commit 2953cd6d17210935098c803c52c6df5b12a725b9 and certain runPgDump() bits of 3dee636e0404885d07885d41c0d70e50c784f324 to 9.2 and 9.1. This synchronizes their doConnStrQuoting() implementations with later releases. Subsequent security patches will modify that function. Security: CVE-2016-5424
* Fix Windows shell argument quoting.  (Noah Misch, 2016-08-08)
  The incorrect quoting may have permitted arbitrary command execution. At a minimum, it gave broader control over the command line to actors supposed to have control over a single argument. Back-patch to 9.1 (all supported versions).
  Security: CVE-2016-5424
* Reject, in pg_dumpall, names containing CR or LF.  (Noah Misch, 2016-08-08)
  These characters prematurely terminate Windows shell command processing, causing the shell to execute a prefix of the intended command. The chief alternative to rejecting these characters was to bypass the Windows shell with CreateProcess(), but the ability to use such names has little value. Back-patch to 9.1 (all supported versions). This change formally revokes support for these characters in database names and role names. Don't document this; the error message is self-explanatory, and too few users would benefit. A future major release may forbid creation of databases and roles so named. For now, check only at known weak points in pg_dumpall. Future commits will, without notice, reject affected names from other frontend programs. Also extend the restriction to pg_dumpall --dbname=CONNSTR arguments and --file arguments. Unlike the effects on role name arguments and database names, this does not reflect a broad policy change. A migration to CreateProcess() could lift these two restrictions.
  Reviewed by Peter Eisentraut.
  Security: CVE-2016-5424
* Field conninfo strings throughout src/bin/scripts.  (Noah Misch, 2016-08-08)
  These programs nominally accepted conninfo strings, but they would proceed to use the original dbname parameter as though it were an unadorned database name. This caused "reindexdb dbname=foo" to issue an SQL command that always failed, and other programs printed a conninfo string in error messages that purported to print a database name. Fix both problems by using PQdb() to retrieve actual database names. Continue to print the full conninfo string when reporting a connection failure. It is informative there, and if the database name is the sole problem, the server-side error message will include the name. Beyond those user-visible fixes, this allows a subsequent commit to synthesize and use conninfo strings without that implementation detail leaking into messages. As a side effect, the "vacuuming database" message now appears after, not before, the connection attempt. Back-patch to 9.1 (all supported versions).
  Reviewed by Michael Paquier and Peter Eisentraut.
  Security: CVE-2016-5424
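  The user-visible half of the fix, sketched against the libpq API (a sketch, not the scripts' shared code; argument handling is simplified): connect with whatever the user gave, whether a plain name or a conninfo string, then ask the established connection for the real database name instead of echoing the raw parameter.

      /* Sketch: recover the actual database name from a conninfo connection. */
      #include <stdio.h>
      #include <libpq-fe.h>

      int
      main(int argc, char **argv)
      {
          /*
           * The argument may be a plain database name or a conninfo string;
           * expand_dbname = 1 lets libpq sort that out.
           */
          const char *dbarg = (argc > 1) ? argv[1] : "postgres";
          const char *keywords[] = {"dbname", NULL};
          const char *values[] = {dbarg, NULL};

          PGconn *conn = PQconnectdbParams(keywords, values, 1);

          if (PQstatus(conn) != CONNECTION_OK)
          {
              /* On failure, the full string the user gave is the useful detail. */
              fprintf(stderr, "could not connect to \"%s\": %s", dbarg,
                      PQerrorMessage(conn));
              PQfinish(conn);
              return 1;
          }

          /* For progress messages and generated SQL, use the real name. */
          printf("vacuuming database \"%s\"\n", PQdb(conn));

          PQfinish(conn);
          return 0;
      }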
* Introduce a psql "\connect -reuse-previous=on|off" option.  (Noah Misch, 2016-08-08)
  The decision to reuse values of parameters from a previous connection has been based on whether the new target is a conninfo string. Add this means of overriding that default. This feature arose as one component of a fix for security vulnerabilities in pg_dump, pg_dumpall, and pg_upgrade, so back-patch to 9.1 (all supported versions). In 9.3 and later, comment paragraphs that required update had already-incorrect claims about behavior when no connection is open; fix those problems.
  Security: CVE-2016-5424