Commit log (newest first): commit message, author, commit date
* Attach ON CONFLICT SET ... WHERE to the correct planstate. (Andres Freund, 2015-05-19)
  The previous coding was a leftover from attempting to hang all the ON CONFLICT logic
  onto ModifyTable's child nodes. It appears not to have actually caused problems except
  for EXPLAIN. Add tests exercising the broken path and some other code paths.
  Author: Peter Geoghegan and Andres Freund
* Put back a backwards-compatible version of sampling support functions. (Tom Lane, 2015-05-18)
  Commit 83e176ec18d2a91dbea1d0d1bd94c38dc47cd77c removed the longstanding support
  functions for block sampling without any consideration of the impact this would have
  on third-party FDWs. The new API is not notably more functional for FDWs than the old,
  so forcing them to change doesn't seem like a good thing. We can provide the old API
  as a wrapper (more or less) around the new one for a minimal amount of extra code.
* Recognize "REGRESS_OPTS += ..." syntax in MSVC build scripts. (Tom Lane, 2015-05-18)
  Necessitated by commit b14cf229f4bd7238be2e31d873dc5dd241d3871e. Per buildfarm.
* Fix error message in pre_sync_fname. (Robert Haas, 2015-05-18)
  The old one didn't include %m anywhere, and required extra translation.
  Report by Peter Eisentraut. Fix by me. Review by Tom Lane.
* Last-minute updates for release notes. (Tom Lane, 2015-05-18)
  Add entries for security issues.
  Security: CVE-2015-3165 through CVE-2015-3167
* pgcrypto: Report errant decryption as "Wrong key or corrupt data". (Noah Misch, 2015-05-18)
  This has been the predominant outcome. When the output of decrypting with a wrong key
  coincidentally resembled an OpenPGP packet header, pgcrypto could instead report
  "Corrupt data", "Not text data" or "Unsupported compression algorithm". The distinct
  "Corrupt data" message added no value. The latter two error messages misled when the
  decrypted payload also exhibited fundamental integrity problems. Worse, error message
  variance in other systems has enabled cryptologic attacks; see RFC 4880 section
  "14. Security Considerations". Whether these pgcrypto behaviors are likewise
  exploitable is unknown.

  In passing, document that pgcrypto does not resist side-channel attacks.

  Back-patch to 9.0 (all supported versions).
  Security: CVE-2015-3167
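  As an illustration of the new behavior (not taken from the commit; it assumes the
  pgcrypto extension is installed and uses made-up passphrases), decrypting with the
  wrong key now reports a single, uniform error:

      SELECT pgp_sym_decrypt(
               pgp_sym_encrypt('attack at dawn', 'correct-passphrase'),
               'wrong-passphrase');
      -- ERROR:  Wrong key or corrupt data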
* Check return values of sensitive system library calls. (Noah Misch, 2015-05-18)
  PostgreSQL already checked the vast majority of these, missing this handful that
  nearly cannot fail. If putenv() failed with ENOMEM in pg_GSS_recvauth(), authentication
  would proceed with the wrong keytab file. If strftime() returned zero in
  cache_locale_time(), using the unspecified buffer contents could lead to information
  exposure or a crash. Back-patch to 9.0 (all supported versions).

  Other unchecked calls to these functions, especially those in frontend code, pose
  negligible security concern. This patch does not address them. Nonetheless, it is
  always better to check return values whose specification provides for indicating an
  error.

  In passing, fix an off-by-one error in strftime_win32()'s invocation of
  WideCharToMultiByte(). Upon retrieving a value of exactly MAX_L10N_DATA bytes,
  strftime_win32() would overrun the caller's buffer by one byte. MAX_L10N_DATA is
  chosen to exceed the length of every possible value, so the vulnerable scenario
  probably does not arise.

  Security: CVE-2015-3166
* Add error-throwing wrappers for the printf family of functions. (Noah Misch, 2015-05-18)
  All known standard library implementations of these functions can fail with ENOMEM.
  A caller neglecting to check for failure would experience missing output, information
  exposure, or a crash. Check return values within wrappers and code, currently just
  snprintf.c, that bypasses the wrappers. The wrappers do not return after an error, so
  their callers need not check. Back-patch to 9.0 (all supported versions).

  Popular free software standard library implementations do take pains to bypass
  malloc() in simple cases, but they risk ENOMEM for floating point numbers, positional
  arguments, large field widths, and large precisions. No specification demands such
  caution, so this commit regards every call to a printf family function as a potential
  threat.

  Injecting the wrappers implicitly is a compromise between patch scope and design
  goals. I would prefer to edit each call site to name a wrapper explicitly. libpq and
  the ECPG libraries would, ideally, convey errors to the caller rather than abort().
  All that would be painfully invasive for a back-patched security fix, hence this
  compromise.

  Security: CVE-2015-3166
* Permit use of vsprintf() in PostgreSQL code. (Noah Misch, 2015-05-18)
  The next commit needs it. Back-patch to 9.0 (all supported versions).
* Prevent a double free by not reentering be_tls_close(). (Noah Misch, 2015-05-18)
  Reentering this function with the right timing caused a double free, typically
  crashing the backend. By synchronizing a disconnection with the authentication
  timeout, an unauthenticated attacker could achieve this somewhat consistently.
  Call be_tls_close() solely from within proc_exit_prepare(). Back-patch to 9.0
  (all supported versions).

  Benkocs Norbert Attila
  Security: CVE-2015-3165
* Fix typo in comment. (Heikki Linnakangas, 2015-05-18)
  Jim Nasby
* Put back stats-collector restarting code, removed accidentally. (Heikki Linnakangas, 2015-05-18)
  Removed that code snippet accidentally in the archive_mode='always' patch.
  Also, use varname-tags for archive_command in the docs.
  Fujii Masao
* Don't classify REINDEX command as DDL in the pg_audit doc. (Fujii Masao, 2015-05-18)
  Commit a936743 changed the class of REINDEX but forgot to update the doc.
* Add new files to nls.mk (Peter Eisentraut, 2015-05-17)
* Fix failure to copy IndexScan.indexorderbyops in copyfuncs.c. (Tom Lane, 2015-05-17)
  This oversight results in a crash at executor startup if the plan has been copied.
  outfuncs.c was missed as well. While we could probably have taught both those files
  to cope with the originally chosen representation of an Oid array, it would have been
  painful, not least because there'd be no easy way to verify the array length. An Oid
  List is far easier to work with. And AFAICS, there is no particular notational benefit
  to using an array rather than a list in the existing parts of the patch either. So
  just change it to a list.

  Error in commit 35fcb1b3d038a501f3f4c87c05630095abaaadab, which is new, so no need
  for back-patch.
* Use += not = to set makefile variables after including base makefiles. (Tom Lane, 2015-05-17)
  The previous coding in hstore_plpython and ltree_plpython wiped out any values set by
  the base makefiles. This at least had the effect of running the tests in the
  "regression" database rather than in "contrib_regression" as expected. These being
  pretty new modules, there might be other bad effects we'd not noticed yet.
* Release notes for 9.4.2, 9.3.7, 9.2.11, 9.1.16, 9.0.20. (Tom Lane, 2015-05-17)
* Fix wording error caused by recent typo fixes (Magnus Hagander, 2015-05-17)
  It wasn't just a typo, but bad wording. This should make it more clear.
  Pointed out by Tom Lane.
* pg_audit Makefile, REINDEX changes (Stephen Frost, 2015-05-17)
  Clean up the Makefile, per Michael Paquier.
  Classify REINDEX as we do in core, use '1.0' for the version, per Fujii.
* Fix typos in comments (Magnus Hagander, 2015-05-17)
  Dmitriy Olshevskiy
* Minor docs fixes for pg_audit (Magnus Hagander, 2015-05-17)
  Peter Geoghegan
* hstore_plpython: Fix regression tests under Python 3 (Peter Eisentraut, 2015-05-16)
* Fix whitespace (Peter Eisentraut, 2015-05-16)
* First-draft release notes for 9.4.2 et al. (Tom Lane, 2015-05-16)
  As usual, the release notes for older branches will be made by cutting these down,
  but put them up for community review first.
* pg_upgrade: no need to check for matching float8_pass_by_value (Bruce Momjian, 2015-05-16)
  Report by Noah Misch
* Fix docs typo (Tom Lane, 2015-05-16)
  I don't think "respectfully" is what was meant here ...
* More portability fixing for bipartite_match.c. (Tom Lane, 2015-05-16)
  <float.h> is required for isinf() on some platforms. Per buildfarm.
* pg_upgrade: force timeline 1 in the new cluster (Bruce Momjian, 2015-05-16)
  Previously, this prevented promoted standby servers from being upgraded because of a
  missing WAL history file. (Timeline 1 doesn't need a history file, and we don't copy
  WAL files anyway.)

  Report by Christian Echerer(?), Alexey Klyukin
  Backpatch through 9.0
* pg_upgrade: only allow template0 to be non-connectable (Bruce Momjian, 2015-05-16)
  This patch causes pg_upgrade to error out during its check phase if:
  (1) template0 is marked connectable, or
  (2) any other database is marked non-connectable.
  This is done because, in the first case, pg_upgrade would fail because the
  pg_dumpall --globals restore would fail, and in the second case, the database would
  not be restored, leading to data loss.

  Report by Matt Landry (1), Stephen Frost (2)
  Backpatch through 9.0
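  A quick way to see whether an old cluster would trip this check is to inspect
  pg_database directly; the query below is illustrative only and is not part of
  pg_upgrade:

      -- Lists databases whose connectability flag pg_upgrade now rejects.
      SELECT datname, datallowconn
      FROM pg_database
      WHERE (datname = 'template0' AND datallowconn)
         OR (datname <> 'template0' AND NOT datallowconn);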
* Avoid direct use of INFINITY. (Tom Lane, 2015-05-15)
  It's not very portable. Per buildfarm.
* Add docs for tablesample system_time() (Simon Riggs, 2015-05-15)
* Support GROUPING SETS, CUBE and ROLLUP. (Andres Freund, 2015-05-16)
  This SQL standard functionality allows aggregating data by different GROUP BY clauses
  at once. Each grouping set returns rows with the columns it does not group by set to
  NULL. This could previously be achieved by doing each grouping as a separate query,
  conjoined by UNION ALLs. Besides being considerably more concise, grouping sets will
  in many cases be faster, requiring only one scan over the underlying data.

  The current implementation of grouping sets only supports using sorting for input.
  Individual sets that share a sort order are computed in one pass. If there are sets
  that don't share a sort order, additional sort & aggregation steps are performed.
  These additional passes are sourced by the previous sort step, thus avoiding repeated
  scans of the source data. The code is structured in a way that adding support for
  purely using hash aggregation, or a mix of hashing and sorting, is possible. Sorting
  was chosen to be supported first, as it is the most generic method of implementation.

  Instead of, as in earlier versions of the patch, representing the chain of sort and
  aggregation steps as full-blown planner and executor nodes, all but the first sort
  are performed inside the aggregation node itself. This avoids the need to do some
  unusual gymnastics to handle having to return aggregated and non-aggregated tuples
  from underlying nodes, as well as having to shut down underlying nodes early to limit
  memory usage. The optimizer still builds Sort/Agg nodes to describe each phase, but
  they're not part of the plan tree; instead they are additional data for the
  aggregation node. They're a convenient and preexisting way to describe aggregation
  and sorting. The first (and possibly only) sort step is still performed as a separate
  execution step. That retains similarity with existing GROUP BY plans, makes rescans
  fairly simple, avoids very deep plans (leading to slow EXPLAINs) and easily allows
  skipping the sorting step if the underlying data is already sorted by other means.

  A somewhat ugly side of this patch is having to deal with a grammar ambiguity between
  the new CUBE keyword and the cube extension/functions named cube (and rollup). To
  avoid breaking existing deployments of the cube extension it has not been renamed,
  neither has cube been made a reserved keyword. Instead precedence hacking is used to
  make GROUP BY cube(..) refer to the CUBE grouping sets feature, and not the function
  cube(). To actually group by a function cube(), unlikely as that might be, the
  function name has to be quoted.

  Needs a catversion bump because stored rules may change.

  Author: Andrew Gierth and Atri Sharma, with contributions from Andres Freund
  Reviewed-By: Andres Freund, Noah Misch, Tom Lane, Svenne Krap, Tomas Vondra,
    Erik Rijkers, Marti Raudsepp, Pavel Stehule
  Discussion: CAOeZVidmVRe2jU6aMk_5qkxnB7dfmPROzM7Ur8JPW5j8Y5X-Lw@mail.gmail.com
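  A small sketch of the user-facing behavior (the items_sold table and its columns are
  hypothetical): one GROUPING SETS query replaces several aggregations previously glued
  together with UNION ALL:

      SELECT brand, size, sum(sales)
      FROM items_sold
      GROUP BY GROUPING SETS ((brand), (size), ());

      -- Roughly equivalent pre-9.5 formulation:
      SELECT brand, NULL AS size, sum(sales) FROM items_sold GROUP BY brand
      UNION ALL
      SELECT NULL, size, sum(sales) FROM items_sold GROUP BY size
      UNION ALL
      SELECT NULL, NULL, sum(sales) FROM items_sold;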
* Add docs for tablesample system_rows() (Simon Riggs, 2015-05-15)
* Update time zone data files to tzdata release 2015d. (Tom Lane, 2015-05-15)
  DST law changes in Egypt, Mongolia, Palestine. Historical corrections for Canada and
  Chile. Revised zone abbreviation for America/Adak (HST/HDT not HAST/HADT).
* Add BRIN infrastructure for "inclusion" opclasses (Alvaro Herrera, 2015-05-15)
  This lets BRIN be used with R-Tree-like indexing strategies. Also provided are
  operator classes for range types, box and inet/cidr. The infrastructure provided here
  should be sufficient to create operator classes for similar datatypes; for instance,
  opclasses for PostGIS geometries should be doable, though we didn't try to implement
  one.

  (A box/point opclass was also submitted, but we ripped it out before commit because
  the handling of floating point comparisons in existing code is inconsistent and would
  generate corrupt indexes.)

  Author: Emre Hasegeli. Cosmetic changes by me
  Review: Andreas Karlsson
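  A sketch of how an inclusion opclass might be used (the table is hypothetical, and
  inet_inclusion_ops is the name under which the inet inclusion opclass is exposed in
  released versions; treat exact names and operator coverage as illustrative):

      CREATE TABLE conn_log (client_addr inet, logged_at timestamptz);
      CREATE INDEX conn_log_addr_brin ON conn_log
          USING brin (client_addr inet_inclusion_ops);
      -- Containment searches such as the following can skip block ranges whose
      -- summary network does not overlap the requested one.
      SELECT * FROM conn_log WHERE client_addr << inet '10.0.0.0/8';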
* Improve test for CONVERT() with GB18030 <-> UTF8. (Tom Lane, 2015-05-15)
  Add a bit of coverage of high code points.
  Arjen Nienhuis
* Move strategy numbers to include/access/stratnum.h (Alvaro Herrera, 2015-05-15)
  For upcoming BRIN opclasses, it's convenient to have strategy numbers defined in a
  single place. Since there's nothing appropriate, create it. The StrategyNumber typedef
  now lives there, as well as existing strategy numbers for B-trees (from skey.h) and
  R-tree-and-friends (from gist.h). skey.h is forced to include stratnum.h because of
  the StrategyNumber typedef, but gist.h is not; extensions that currently rely on
  gist.h for rtree strategy numbers might need to add a new include. A few .c files can
  stop including skey.h and/or gist.h, which is a nice side benefit.

  Per discussion:
  https://www.postgresql.org/message-id/20150514232132.GZ2523@alvh.no-ip.org

  Authored by Emre Hasegeli and Álvaro.
  (It's not clear to me why bootscanner.l has any #include lines at all.)
* SQLStandard feature T613 Sampling now Supported (Simon Riggs, 2015-05-15)
* Fix uninitialized variable. (Tom Lane, 2015-05-15)
  Per compiler warnings.
* Tablesample method API docs (Simon Riggs, 2015-05-15)
  Petr Jelinek
* Add to contrib/Makefile (Simon Riggs, 2015-05-15)
* contrib/tsm_system_time (Simon Riggs, 2015-05-15)
* contrib/tsm_system_rows (Simon Riggs, 2015-05-15)
* TABLESAMPLE system_time(limit) (Simon Riggs, 2015-05-15)
  Contrib module implementing a tablesample method that allows you to limit the sample
  by a hard time limit.
  Petr Jelinek
  Reviewed by Michael Paquier, Amit Kapila and Simon Riggs
* TABLESAMPLE system_rows(limit) (Simon Riggs, 2015-05-15)
  Contrib module implementing a tablesample method that allows you to limit the sample
  by a hard row limit.
  Petr Jelinek
  Reviewed by Michael Paquier, Amit Kapila and Simon Riggs
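  Roughly how these two contrib methods are invoked once the extensions are installed
  (the measurements table is hypothetical; the time limit being in milliseconds matches
  the released documentation, but treat the details as illustrative):

      CREATE EXTENSION tsm_system_rows;
      CREATE EXTENSION tsm_system_time;

      -- At most 100 rows, chosen block by block.
      SELECT * FROM measurements TABLESAMPLE system_rows(100);

      -- Stop sampling after roughly 1000 milliseconds.
      SELECT * FROM measurements TABLESAMPLE system_time(1000);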
* Extend GB18030 encoding conversion to cover full Unicode range. (Tom Lane, 2015-05-15)
  Our previous code for GB18030 <-> UTF8 conversion only covered Unicode code points up
  to U+FFFF, but the actual spec defines conversions for all code points up to U+10FFFF.
  That would be rather impractical as a lookup table, but fortunately there is a simple
  algorithmic conversion between the additional code points and the equivalent GB18030
  byte patterns. Make use of the just-added callback facility in LocalToUtf/UtfToLocal
  to perform the additional conversions.

  Having created the infrastructure to do that, we can use the same code to map certain
  linearly-related subranges of the Unicode space below U+FFFF, allowing removal of the
  corresponding lookup table entries. This more than halves the lookup table size, which
  is a substantial savings; utf8_and_gb18030.so drops from nearly a megabyte to about
  half that.

  In support of doing that, replace ISO10646-GB18030.TXT with the data file
  gb-18030-2000.xml (retrieved from
  http://source.icu-project.org/repos/icu/data/trunk/charset/data/xml/ ) in which these
  subranges have been deleted from the simple lookup entries.

  Per bug #12845 from Arjen Nienhuis. The conversion code added here is based on his
  proposed patch, though I whacked it around rather heavily.
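  The newly covered range can be exercised from SQL; in a UTF8 database (byte values
  per the GB18030-2000 mapping, shown here purely for illustration), the first code
  point beyond the BMP round-trips like this:

      SELECT convert_from('\x90308130'::bytea, 'GB18030');  -- U+10000
      SELECT convert_to(U&'\+010000', 'GB18030');           -- \x90308130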
* doc: CREATE FOREIGN TABLE now allows CHECK ( ... ) NO INHERIT (Robert Haas, 2015-05-15)
  Etsuro Fujita
* TABLESAMPLE, SQL Standard and extensible (Simon Riggs, 2015-05-15)
  Add a TABLESAMPLE clause to SELECT statements that allows the user to specify random
  BERNOULLI sampling or block-level SYSTEM sampling. The implementation allows for
  extensible sampling functions to be written, using a standard API. The basic version
  follows the SQL Standard exactly. Usable concrete use cases for the sampling API
  follow in later commits.

  Petr Jelinek
  Reviewed by Michael Paquier and Simon Riggs
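  A minimal sketch of the new clause (the orders table is hypothetical); the argument
  is a sampling percentage, and REPEATABLE fixes the seed so a sample can be reproduced:

      SELECT count(*) FROM orders TABLESAMPLE BERNOULLI (10);               -- ~10% of rows
      SELECT count(*) FROM orders TABLESAMPLE SYSTEM (10) REPEATABLE (42);  -- ~10% of blocks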
* Silence another create_index regression test failure. (Heikki Linnakangas, 2015-05-15)
  More platform differences in the less-significant digits in output. Per buildfarm
  member rover_firefly, still.
* Fix outdated src/test/mb/ tests, and add a GB18030 test. (Tom Lane, 2015-05-15)
  The expected-output files for these tests were broken by the recent addition of a
  warning for hash indexes. Update them.

  Also add a test case for GB18030 encoding, similar to the other ones. This is a
  pretty weak test, but it's better than nothing.