Commit log: root/src/backend

* Simplify addJsonbToParseState() (Andrew Dunstan, 2015-05-26)

  This function no longer needs to walk non-scalar structures passed to it,
  following commit 54547bd87f49326d67051254c363e6597d16ffda.

* Add all structured objects passed to pushJsonbValue piecewise. (Andrew Dunstan, 2015-05-26)

  Commit 9b74f32cdbff8b9be47fc69164eae552050509ff did this for objects of type
  jbvBinary, but in trying further to simplify some of the new jsonb code I
  discovered that objects of type jbvObject or jbvArray passed as WJB_ELEM or
  WJB_VALUE also caused problems. These too are now added component by
  component. Backpatch to 9.4.

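  In backend terms, "component by component" means iterating the container and
  replaying each token into the parse state. A hedged sketch against the public
  jsonb API (push_whole_container is a hypothetical helper, not the committed
  code; it compiles only in a backend/extension context):

      #include "postgres.h"
      #include "utils/jsonb.h"

      /*
       * Feed an entire jsonb container into an in-progress pushJsonbValue()
       * construction one token at a time, instead of pushing it as a
       * single jbvBinary blob.
       */
      static JsonbValue *
      push_whole_container(JsonbParseState **pstate, JsonbContainer *container)
      {
          JsonbIterator *it = JsonbIteratorInit(container);
          JsonbValue  v;
          JsonbValue *res = NULL;
          JsonbIteratorToken tok;

          while ((tok = JsonbIteratorNext(&it, &v, false)) != WJB_DONE)
          {
              /* scalar-bearing tokens carry a value; structural ones do not */
              res = pushJsonbValue(pstate, tok,
                                   tok < WJB_BEGIN_ARRAY ? &v : NULL);
          }
          return res;
      }
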
* Fix valgrind's "unaddressable bytes" whining about BRIN code. (Tom Lane, 2015-05-25)

  brin_form_tuple calculated an exact tuple size, then palloc'd and filled
  just that much. Later, brin_doinsert or brin_doupdate would MAXALIGN the
  tuple size and tell PageAddItem that that was the size of the tuple to
  insert. If the original tuple size wasn't a multiple of MAXALIGN, the net
  result would be that PageAddItem would memcpy a few more bytes than the
  palloc request had been for.

  AFAICS, this is totally harmless in the real world: the error is a read
  overrun not a write overrun, and palloc would certainly have rounded the
  request up to a MAXALIGN multiple internally, so there's no chance of the
  memcpy fetching off the end of memory. Valgrind, however, is picky to the
  byte level not the MAXALIGN level.

  Fix it by pushing the MAXALIGN step back to brin_form_tuple. (The other
  possible source of tuples in this code, brin_form_placeholder_tuple, was
  already producing a MAXALIGN'd result.) In passing, be a bit more paranoid
  about internal allocations in brin_form_tuple.

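  The arithmetic at issue, in a standalone sketch (the MAXIMUM_ALIGNOF of 8
  and the 21-byte size are illustrative assumptions; the macros are simplified
  copies of PostgreSQL's):

      #include <stdint.h>
      #include <stdio.h>
      #include <stdlib.h>

      /* Simplified stand-ins for PostgreSQL's alignment macros. */
      #define MAXIMUM_ALIGNOF 8   /* assumed for illustration */
      #define TYPEALIGN(a, len) \
          (((uintptr_t) (len) + ((a) - 1)) & ~((uintptr_t) ((a) - 1)))
      #define MAXALIGN(len) TYPEALIGN(MAXIMUM_ALIGNOF, (len))

      int
      main(void)
      {
          size_t  exact = 21;                /* hypothetical exact tuple size */
          size_t  padded = MAXALIGN(exact);  /* 24: what PageAddItem is told */

          /*
           * The fix's shape: allocate (and zero) the MAXALIGN'd size up
           * front, so copying 'padded' bytes later never reads past the
           * allocation.
           */
          char   *tup = calloc(1, padded);

          printf("exact=%zu padded=%zu\n", exact, padded);
          free(tup);
          return 0;
      }
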
* Update README.tuplock (Alvaro Herrera, 2015-05-25)

  Multixact truncation is now handled differently, and this file hadn't
  gotten the memo. Per note from Amit Langote. I didn't use his patch,
  though.

  Also update the description of infomask bits, which weren't completely up
  to date either.

  This commit also propagates b01a4f6838 back to 9.3 and 9.4, which
  apparently I failed to do back then.

* Clean up and simplify jsonb_concat code. (Andrew Dunstan, 2015-05-25)

  Some of this is made possible by commit
  9b74f32cdbff8b9be47fc69164eae552050509ff, which lets pushJsonbValue handle
  binary Jsonb values, meaning that clients no longer have to, and some is
  just doing things in simpler and more straightforward ways.

* Fix rescan of IndexScan node with the new lossy GiST distance functions. (Heikki Linnakangas, 2015-05-25)

  Must reset the "reached end" flag and reorder queue at rescan. Per report
  from Regina Obe, bug #13349.

* Manual cleanup of pgindent results. (Tom Lane, 2015-05-24)

  Fix some places where pgindent did silly stuff, often because project
  style wasn't followed to begin with. (I've not touched the atomics
  headers, though.)

* Rename pg_shdepend.c's typedef "objectType" to SharedDependencyObjectType. (Tom Lane, 2015-05-24)

  The name objectType is widely used as a field name, and it's pure luck
  that this conflict has not caused pgindent to go crazy before. It messed
  up pg_audit.c pretty good though. Since pg_shdepend.c doesn't export this
  typedef and only uses it in three places, changing that seems saner than
  changing the field usages.

  Back-patch because we're contemplating using the union of all branch
  typedefs for future pgindent runs, so this won't fix anything if it stays
  the same in back branches.

* Remove no-longer-required function declarations. (Tom Lane, 2015-05-24)

  Remove a bunch of "extern Datum foo(PG_FUNCTION_ARGS);" declarations that
  are no longer needed now that PG_FUNCTION_INFO_V1(foo) provides that. Some
  of these were evidently missed in commit e7128e8dbb305059, but others were
  cargo-culted into code added since then. Possibly that can be blamed in
  part on the fact that we'd not fixed relevant documentation examples,
  which I've now done.

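  The idiom this enables, as a minimal compilable extension sketch (add_one
  is a hypothetical function name):

      #include "postgres.h"
      #include "fmgr.h"

      PG_MODULE_MAGIC;

      /*
       * PG_FUNCTION_INFO_V1 now emits the
       * "extern Datum add_one(PG_FUNCTION_ARGS);" declaration itself,
       * so no separate prototype line is needed.
       */
      PG_FUNCTION_INFO_V1(add_one);

      Datum
      add_one(PG_FUNCTION_ARGS)
      {
          int32   arg = PG_GETARG_INT32(0);

          PG_RETURN_INT32(arg + 1);
      }
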
* pgindent run for 9.5 (Bruce Momjian, 2015-05-23)

* Add error check for lossy distance functions in index-only scans. (Tom Lane, 2015-05-23)

  Maybe we should actually support this, but for the moment let's just throw
  an error if the opclass tries it.

* Fix incorrect snprintf() limit. (Tom Lane, 2015-05-23)

  Typo in commit 7cbee7c0a. No practical effect since the buffer should
  never actually be overrun, but various compilers and static analyzers will
  whine about it.

  Petr Jelinek

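  The bug class in miniature (buffer names and sizes invented for
  illustration): the size argument must describe snprintf's own destination.

      #include <stdio.h>

      int
      main(void)
      {
          char    name[16];
          char    path[64];

          /*
           * snprintf's second argument must be the size of its
           * *destination*; the typo class fixed above is limiting by some
           * other buffer's size instead.
           */
          snprintf(name, sizeof(name), "%s", "example");
          snprintf(path, sizeof(path), "/tmp/%s.dat", name);  /* sizeof(path), not sizeof(name) */

          puts(path);
          return 0;
      }
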
* Still more fixes for lossy-GiST-distance-functions patch. (Tom Lane, 2015-05-23)

  Fix confusion in documentation, substantial memory leakage if float8 or
  float4 are pass-by-reference, and assorted comments that were obsoleted by
  commit 98edd617f3b62a02cb2df9b418fcc4ece45c7ec0.

* Fix yet another bug in ON CONFLICT rule deparsing. (Andres Freund, 2015-05-23)

  Expand testing of rule deparsing a good bit; it's evidently needed.

  Author: Peter Geoghegan, Andres Freund
  Discussion: CAM3SWZQmXxZhQC32QVEOTYfNXJBJ_Q2SDENL7BV14Cq-zL0FLg@mail.gmail.com

* Remove the new UPSERT command tag and use INSERT instead. (Andres Freund, 2015-05-23)

  Previously, INSERT with ON CONFLICT DO UPDATE specified used a new command
  tag -- UPSERT. It was introduced out of concern that INSERT as a command
  tag would be a misrepresentation for ON CONFLICT DO UPDATE, as some
  affected rows may actually have been updated.

  Alvaro Herrera noticed that the implementation of that new command tag was
  incomplete; in subsequent discussion we concluded that having it doesn't
  provide benefits that are in line with the compatibility breaks it
  requires.

  Catversion bump due to the removal of PlannedStmt->isUpsert.

  Author: Peter Geoghegan
  Discussion: 20150520215816.GI5885@postgresql.org

* Fix recently-introduced crash in array_contain_compare(). (Tom Lane, 2015-05-22)

  Silly oversight in commit 1dc5ebc9077ab742079ce5dac9a6664248d42916: when
  array2 is an expanded array, it might have array2->xpn.dnulls equal to
  NULL, indicating the array is known null-free. The code wasn't expecting
  that, because it formerly always used deconstruct_array(), which always
  delivers a nulls array.

  Per bug #13334 from Regina Obe.

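  The defensive pattern the fix implies, as a standalone sketch
  (element_is_null is a hypothetical helper, not the committed code):

      #include <stdbool.h>
      #include <stdio.h>

      /*
       * With expanded arrays, the per-element nulls array may be NULL,
       * meaning "known null-free"; test the pointer before indexing it.
       */
      static bool
      element_is_null(const bool *dnulls, int i)
      {
          return dnulls ? dnulls[i] : false;
      }

      int
      main(void)
      {
          bool    nulls[] = {false, true, false};

          printf("%d %d\n",
                 element_is_null(nulls, 1),   /* 1: explicit nulls bitmap */
                 element_is_null(NULL, 1));   /* 0: known null-free */
          return 0;
      }
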
* Unpack jbvBinary objects passed to pushJsonbValue (Andrew Dunstan, 2015-05-22)

  pushJsonbValue was accepting jbvBinary objects passed as WJB_ELEM or
  WJB_VALUE data. While this succeeded, when those objects were later
  encountered in attempting to convert the result to Jsonb, errors occurred.
  With this change we guarantee that a JsonbValue constructed from calls to
  pushJsonbValue does not contain any jbvBinary objects. This cures a
  problem observed with jsonb_delete.

  This means callers of pushJsonbValue no longer need to perform this
  unpacking themselves. A subsequent patch will perform some cleanup in that
  area.

  The error was not triggered by any 9.4 code, but this is a publicly
  visible routine, and so the error could be exercised by third party code;
  therefore backpatch to 9.4.

  Bug report from Peter Geoghegan, fix by me.

* At promotion, don't leave behind a partial segment on the old timeline. (Heikki Linnakangas, 2015-05-22)

  With commit de768844, a copy of the partial segment was archived with the
  .partial suffix, but the original file was still left in pg_xlog, so it
  didn't actually solve the problems with archiving the partial segment that
  it was supposed to solve. With this patch, the partial segment is renamed
  rather than copied, so we only archive it with the .partial suffix.

  Also be more robust in detecting if the last segment is already being
  archived. Previously I used XLogArchiveIsBusy() for that, but that's not
  quite right. With archive_mode='always', there might be a .ready file for
  it, and we don't want to rename it to .partial in that case.

  The old segment is needed until we're fully committed to the new timeline,
  i.e. until we've written the end-of-recovery WAL record and updated the
  min recovery point and timeline in the control file. So move the renaming
  later in the startup sequence, after all that's been done.

* More fixes for lossy-GiST-distance-functions patch. (Tom Lane, 2015-05-21)

  Paul Ramsey reported that commit 35fcb1b3d038a501f3f4c87c05630095abaaadab
  induced a core dump on commuted ORDER BY expressions, because it was
  assuming that the indexorderby expression could be found verbatim in the
  relevant equivalence class, but it wasn't there. We really don't need
  anything that complicated anyway; for the data types likely to be used for
  index ORDER BY operators in the foreseeable future, the exprType() of the
  ORDER BY expression will serve fine. (The case where we'd have to work
  harder is where the ORDER BY expression's result is only binary-compatible
  with the declared input type of the ordering operator; long before
  worrying about that, one would need to get rid of GiST's hard-wired
  assumption that said datatype is float8.)

  Aside from fixing that crash and adding a regression test for the case, I
  did some desultory code review: nodeIndexscan.c was likewise overthinking
  how hard it ought to work to identify the datatype of the ORDER BY
  expressions. Add comments explaining how come nodeIndexscan.c can get away
  with simplifying assumptions about NULLS LAST ordering and no backward
  scan. Revert no-longer-needed changes of find_ec_member_for_tle(); while
  the new definition was no worse than the old, it wasn't better either, and
  it might cause back-patching pain. Revert entirely bogus additions to
  genam.h.

* Improve packing/alignment annotation for ItemPointerData. (Tom Lane, 2015-05-21)

  We want this struct to be exactly a series of 3 int16 words, no more and
  no less. Historically, at least, some ARM compilers preferred to pad it to
  8 bytes unless coerced. Our old way of doing that was just to use
  __attribute__((packed)), but as pointed out by Piotr Stefaniak, that does
  too much: it also licenses the compiler to give the struct only
  byte-alignment. We don't want that because it adds access overhead,
  possibly quite significant overhead.

  According to the GCC manual, what we want requires also specifying
  __attribute__((align(2))). It's not entirely clear if all the relevant
  compilers accept this pragma as well, but we can hope the buildfarm will
  tell us if not. We can also add a static assertion that should fire if the
  compiler padded the struct.

  Since the combination of these pragmas should define exactly what we want
  on any compiler that accepts them, let's try using them wherever we think
  they exist, not only for __arm__. (This is likely to expose that the
  conditional definitions in c.h are inadequate, but finding that out would
  be a good thing.)

  The immediate motivation for this is that the current definition of
  ExecRowMark allows its curCtid field to be misaligned. It is not clear
  whether there are any other uses of ItemPointerData with a similar hazard.
  We could change the definition of ExecRowMark if this doesn't work, but it
  would be far better to have a future-proof fix.

  Piotr Stefaniak, some further hacking by me

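  A standalone sketch of the combined annotation plus the static assertion;
  the members are simplified stand-ins for the real BlockIdData/OffsetNumber
  layout, and note that current GCC spells the attribute aligned(2):

      #include <stdint.h>
      #include <stdio.h>

      typedef uint16_t uint16;

      /*
       * packed forbids internal/trailing padding; aligned(2) restores
       * two-byte alignment so member access stays cheap.
       */
      typedef struct ItemPointerData
      {
          uint16  bi_hi;      /* simplified stand-in for BlockIdData */
          uint16  bi_lo;
          uint16  ip_posid;   /* stand-in for OffsetNumber */
      } __attribute__((packed, aligned(2))) ItemPointerData;

      /* Fires at compile time if the compiler padded the struct anyway. */
      _Static_assert(sizeof(ItemPointerData) == 3 * sizeof(uint16),
                     "ItemPointerData must be exactly three 16-bit words");

      int
      main(void)
      {
          printf("sizeof = %zu, alignof = %zu\n",
                 sizeof(ItemPointerData), _Alignof(ItemPointerData));
          return 0;
      }
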
* Make recovery_target_action = pause work. (Fujii Masao, 2015-05-21)

  Previously, even if recovery_target_action was set to pause and the
  recovery target was reached, the recovery could never be paused, because
  the setting of pause was unexpectedly *always* overridden with that of
  shutdown. This override is valid and intentional if hot_standby is not
  enabled, because there is then no way to resume the paused recovery and
  the setting of pause is completely useless. But not if hot_standby is
  enabled. This patch changes the code so that the setting of pause is
  overridden with that of shutdown only when hot_standby is not enabled.

  Bug reported by Andres Freund

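  The corrected control flow, mocked standalone (the names mirror the
  backend's recovery_target_action machinery but are assumptions here):

      #include <stdbool.h>
      #include <stdio.h>

      /* Mocked stand-ins for the backend's recovery-target machinery. */
      typedef enum
      {
          RECOVERY_TARGET_ACTION_PAUSE,
          RECOVERY_TARGET_ACTION_PROMOTE,
          RECOVERY_TARGET_ACTION_SHUTDOWN
      } RecoveryTargetAction;

      int
      main(void)
      {
          bool    EnableHotStandby = false;   /* hot_standby = off */
          RecoveryTargetAction action = RECOVERY_TARGET_ACTION_PAUSE;

          /*
           * The fix's shape: downgrade pause to shutdown only when hot
           * standby is off, because only then would a paused recovery be
           * impossible to resume.
           */
          if (action == RECOVERY_TARGET_ACTION_PAUSE && !EnableHotStandby)
              action = RECOVERY_TARGET_ACTION_SHUTDOWN;

          printf("effective action = %d\n", (int) action);
          return 0;
      }
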
* Another typo fix. (Tom Lane, 2015-05-20)

  In the spirit of the season.

* Fix more typos in comments. (Heikki Linnakangas, 2015-05-20)

  Patch by CharSyam, plus a few more I spotted with grep.

* Collection of typo fixes. (Heikki Linnakangas, 2015-05-20)

  Use "a" and "an" correctly, mostly in comments. Two error messages were
  also fixed (they were just elogs, so no translation work required). Two
  function comments in pg_proc.h were also fixed. Etsuro Fujita reported one
  of these, but I found a lot more with grep.

  Also fix a few other typos spotted while grepping for the a/an typos. For
  example, "consists out of ..." -> "consists of ...". Plus a
  "though"/"through" mixup reported by Euler Taveira.

  Many of these typos were in old code, which would be nice to backpatch to
  make future backpatching easier. But much of the code was new, and I
  didn't feel like crafting separate patches for each branch. So no
  backpatching.

* Fix spelling in comment (Simon Riggs, 2015-05-19)

* Various fixes around ON CONFLICT for rule deparsing. (Andres Freund, 2015-05-19)

  Neither the deparsing of the new alias for INSERT's target table, nor of
  the inference clause was supported. Also fix up a typo in an error
  message. Add regression tests to test those code paths.

  Author: Peter Geoghegan

* Refactor ON CONFLICT index inference parse tree representation. (Andres Freund, 2015-05-19)

  Defer lookup of the opfamily and input type of a user-specified opclass
  until the optimizer selects among available unique indexes, and store the
  opclass in the parse-analyzed tree instead. The primary reason for doing
  this is that for rule deparsing it's easier to use the opclass than the
  previous representation.

  While at it, also rename a variable in the inference code to better fit
  its purpose.

  This is separate from the actual fixes for deparsing to make review
  easier.

* Fix off-by-one error in Assertion. (Heikki Linnakangas, 2015-05-19)

  The point of the assertion is to ensure that the arrays allocated on the
  stack are large enough, but the check was one item short. This won't
  matter in practice because MaxIndexTuplesPerPage is an overestimate, so
  you can't have that many items on a page in reality. But let's be tidy.

  Spotted by Anastasia Lubennikova. Backpatch to all supported versions,
  like the patch that added the assertion.

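  The off-by-one class in miniature (MAX_ITEMS is an invented stand-in for
  MaxIndexTuplesPerPage):

      #include <assert.h>
      #include <stdio.h>

      #define MAX_ITEMS 4     /* invented stand-in for MaxIndexTuplesPerPage */

      int
      main(void)
      {
          int     items[MAX_ITEMS];
          int     n;

          for (n = 0; n < MAX_ITEMS; n++)
          {
              /*
               * To guard a write to items[n], assert n < MAX_ITEMS; the
               * off-by-one variant "n <= MAX_ITEMS" permits one write past
               * the end of the array.
               */
              assert(n < MAX_ITEMS);
              items[n] = n;
          }
          printf("filled %d items\n", n);
          return 0;
      }
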
* Revert "Change pg_seclabel.provider and pg_shseclabel.provider to type "name"." (Tom Lane, 2015-05-19)

  This reverts commit b82a7be603f1811a0a707b53c62de6d5d9431740. There is a
  better (less invasive) way to fix it, which I will commit next.

* Fix parse tree of DROP TRANSFORM and COMMENT ON TRANSFORM (Peter Eisentraut, 2015-05-18)

  The plain C string language name needs to be wrapped in makeString() so
  that the parse tree is copyable. This is detectable by
  -DCOPY_PARSE_PLAN_TREES.

  Add a test case for the COMMENT case. Also make the quoting in the error
  messages more consistent.

  discovered by Tom Lane

* Change pg_seclabel.provider and pg_shseclabel.provider to type "name". (Tom Lane, 2015-05-18)

  These were "text", but that's a bad idea because it has
  collation-dependent ordering. No index in template0 should have
  collation-dependent ordering, especially not indexes on shared catalogs.
  There was general agreement that provider names don't need to be longer
  than other identifiers, so we can fix this at a small waste of table space
  by changing from text to name.

  There's no way to fix the problem in the back branches, but we can hope
  that security labels don't yet have widespread-enough usage to make it
  urgent to fix.

  There needs to be a regression sanity test to prevent us from making this
  same mistake again; but before putting that in, we'll need to get rid of
  similar brain fade in the recently-added pg_replication_origin catalog.

  Note: for lack of a suitable testing environment, I've not really
  exercised this change. I trust the buildfarm will show up any mistakes.

* Attach ON CONFLICT SET ... WHERE to the correct planstate. (Andres Freund, 2015-05-19)

  The previous coding was a leftover from attempting to hang all the on
  conflict logic onto modify table's child nodes. It appears to not have
  actually caused problems except for EXPLAIN.

  Add tests exercising the broken and some other code paths.

  Author: Peter Geoghegan and Andres Freund

* Put back a backwards-compatible version of sampling support functions. (Tom Lane, 2015-05-18)

  Commit 83e176ec18d2a91dbea1d0d1bd94c38dc47cd77c removed the longstanding
  support functions for block sampling without any consideration of the
  impact this would have on third-party FDWs. The new API is not notably
  more functional for FDWs than the old, so forcing them to change doesn't
  seem like a good thing. We can provide the old API as a wrapper (more or
  less) around the new one for a minimal amount of extra code.

* Fix error message in pre_sync_fname. (Robert Haas, 2015-05-18)

  The old one didn't include %m anywhere, and required extra translation.

  Report by Peter Eisentraut. Fix by me. Review by Tom Lane.

* Check return values of sensitive system library calls. (Noah Misch, 2015-05-18)

  PostgreSQL already checked the vast majority of these, missing this
  handful that nearly cannot fail. If putenv() failed with ENOMEM in
  pg_GSS_recvauth(), authentication would proceed with the wrong keytab
  file. If strftime() returned zero in cache_locale_time(), using the
  unspecified buffer contents could lead to information exposure or a crash.
  Back-patch to 9.0 (all supported versions).

  Other unchecked calls to these functions, especially those in frontend
  code, pose negligible security concern. This patch does not address them.
  Nonetheless, it is always better to check return values whose
  specification provides for indicating an error.

  In passing, fix an off-by-one error in strftime_win32()'s invocation of
  WideCharToMultiByte(). Upon retrieving a value of exactly MAX_L10N_DATA
  bytes, strftime_win32() would overrun the caller's buffer by one byte.
  MAX_L10N_DATA is chosen to exceed the length of every possible value, so
  the vulnerable scenario probably does not arise.

  Security: CVE-2015-3166

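  The two checks, sketched standalone (the keytab value and date format are
  placeholders, not the backend's):

      #include <stdio.h>
      #include <stdlib.h>
      #include <time.h>

      int
      main(void)
      {
          /*
           * putenv() can fail with ENOMEM; ignoring that would let later
           * code run with a stale or missing setting.
           */
          static char keytab[] = "KRB5_KTNAME=/etc/krb5.keytab";  /* placeholder */

          if (putenv(keytab) != 0)
          {
              perror("putenv");
              return 1;
          }

          /*
           * strftime() returns 0 when the result doesn't fit, leaving the
           * buffer contents unspecified; never use the buffer in that case.
           */
          char    buf[64];
          time_t  now = time(NULL);

          if (strftime(buf, sizeof(buf), "%A %d %B %Y", localtime(&now)) == 0)
          {
              fprintf(stderr, "strftime: buffer too small\n");
              return 1;
          }
          printf("%s\n", buf);
          return 0;
      }
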
* Prevent a double free by not reentering be_tls_close(). (Noah Misch, 2015-05-18)

  Reentering this function with the right timing caused a double free,
  typically crashing the backend. By synchronizing a disconnection with the
  authentication timeout, an unauthenticated attacker could achieve this
  somewhat consistently. Call be_tls_close() solely from within
  proc_exit_prepare(). Back-patch to 9.0 (all supported versions).

  Benkocs Norbert Attila

  Security: CVE-2015-3165

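  The committed fix confines the call to one exit path. A related defensive
  shape, an idempotent close that makes accidental reentry harmless, sketched
  with a mock connection struct (not the backend's code):

      #include <stdio.h>
      #include <stdlib.h>

      typedef struct
      {
          void   *ssl;            /* mock stand-in for the TLS handle */
      } Port;

      /*
       * Idempotent close: free once, then NULL the pointer, so a second
       * call from another exit path cannot double-free.
       */
      static void
      tls_close(Port *port)
      {
          if (port->ssl == NULL)
              return;
          free(port->ssl);
          port->ssl = NULL;
      }

      int
      main(void)
      {
          Port    p = {malloc(16)};

          tls_close(&p);
          tls_close(&p);          /* harmless second call */
          puts("ok");
          return 0;
      }
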
* Fix typo in comment. (Heikki Linnakangas, 2015-05-18)

  Jim Nasby

* Put back stats-collector restarting code, removed accidentally. (Heikki Linnakangas, 2015-05-18)

  Removed that code snippet accidentally in the archive_mode='always' patch.
  Also, use varname tags for archive_command in the docs.

  Fujii Masao

* Fix failure to copy IndexScan.indexorderbyops in copyfuncs.c. (Tom Lane, 2015-05-17)

  This oversight results in a crash at executor startup if the plan has been
  copied. outfuncs.c was missed as well. While we could probably have taught
  both those files to cope with the originally chosen representation of an
  Oid array, it would have been painful, not least because there'd be no
  easy way to verify the array length. An Oid List is far easier to work
  with. And AFAICS, there is no particular notational benefit to using an
  array rather than a list in the existing parts of the patch either. So
  just change it to a list.

  Error in commit 35fcb1b3d038a501f3f4c87c05630095abaaadab, which is new, so
  no need for back-patch.

* Fix typos in comments (Magnus Hagander, 2015-05-17)

  Dmitriy Olshevskiy

* Fix whitespace (Peter Eisentraut, 2015-05-16)

* More portability fixing for bipartite_match.c. (Tom Lane, 2015-05-16)

  <float.h> is required for isinf() on some platforms. Per buildfarm.

* Avoid direct use of INFINITY. (Tom Lane, 2015-05-15)

  It's not very portable. Per buildfarm.

* Support GROUPING SETS, CUBE and ROLLUP. (Andres Freund, 2015-05-16)

  This SQL standard functionality makes it possible to aggregate data by
  different GROUP BY clauses at once. Each grouping set returns rows with
  the columns grouped by other sets set to NULL. This could previously be
  achieved by doing each grouping as a separate query, conjoined by UNION
  ALLs. Besides being considerably more concise, grouping sets will in many
  cases be faster, requiring only one scan over the underlying data.

  The current implementation of grouping sets only supports using sorting
  for input. Individual sets that share a sort order are computed in one
  pass. If there are sets that don't share a sort order, additional sort and
  aggregation steps are performed. These additional passes are sourced by
  the previous sort step, thus avoiding repeated scans of the source data.
  The code is structured in a way that adding support for purely using hash
  aggregation, or a mix of hashing and sorting, is possible. Sorting was
  chosen to be supported first, as it is the most generic method of
  implementation.

  Instead of, as in earlier versions of the patch, representing the chain of
  sort and aggregation steps as full-blown planner and executor nodes, all
  but the first sort are performed inside the aggregation node itself. This
  avoids the need to do some unusual gymnastics to handle having to return
  aggregated and non-aggregated tuples from underlying nodes, as well as
  having to shut down underlying nodes early to limit memory usage. The
  optimizer still builds Sort/Agg nodes to describe each phase, but they're
  not part of the plan tree; instead they are additional data for the
  aggregation node. They're a convenient and preexisting way to describe
  aggregation and sorting. The first (and possibly only) sort step is still
  performed as a separate execution step. That retains similarity with
  existing GROUP BY plans, makes rescans fairly simple, avoids very deep
  plans (leading to slow explains) and easily allows skipping the sorting
  step if the underlying data is sorted by other means.

  A somewhat ugly side of this patch is having to deal with a grammar
  ambiguity between the new CUBE keyword and the cube extension/functions
  named cube (and rollup). To avoid breaking existing deployments of the
  cube extension it has not been renamed, neither has cube been made a
  reserved keyword. Instead precedence hacking is used to make GROUP BY
  cube(..) refer to the CUBE grouping sets feature, and not the function
  cube(). To actually group by a function cube(), unlikely as that might be,
  the function name has to be quoted.

  Needs a catversion bump because stored rules may change.

  Author: Andrew Gierth and Atri Sharma, with contributions from Andres Freund
  Reviewed-By: Andres Freund, Noah Misch, Tom Lane, Svenne Krap, Tomas
  Vondra, Erik Rijkers, Marti Raudsepp, Pavel Stehule
  Discussion: CAOeZVidmVRe2jU6aMk_5qkxnB7dfmPROzM7Ur8JPW5j8Y5X-Lw@mail.gmail.com

* Add BRIN infrastructure for "inclusion" opclasses (Alvaro Herrera, 2015-05-15)

  This lets BRIN be used with R-Tree-like indexing strategies. Also provided
  are operator classes for range types, box and inet/cidr. The
  infrastructure provided here should be sufficient to create operator
  classes for similar datatypes; for instance, opclasses for PostGIS
  geometries should be doable, though we didn't try to implement one.

  (A box/point opclass was also submitted, but we ripped it out before
  commit because the handling of floating point comparisons in existing code
  is inconsistent and would generate corrupt indexes.)

  Author: Emre Hasegeli. Cosmetic changes by me
  Review: Andreas Karlsson

* Move strategy numbers to include/access/stratnum.h (Alvaro Herrera, 2015-05-15)

  For upcoming BRIN opclasses, it's convenient to have strategy numbers
  defined in a single place. Since there's nothing appropriate, create it.
  The StrategyNumber typedef now lives there, as well as existing strategy
  numbers for B-trees (from skey.h) and R-tree-and-friends (from gist.h).

  skey.h is forced to include stratnum.h because of the StrategyNumber
  typedef, but gist.h is not; extensions that currently rely on gist.h for
  rtree strategy numbers might need to add a new #include line (a sketch
  follows below). A few .c files can stop including skey.h and/or gist.h,
  which is a nice side benefit.

  Per discussion:
  https://www.postgresql.org/message-id/20150514232132.GZ2523@alvh.no-ip.org

  Authored by Emre Hasegeli and Álvaro. (It's not clear to me why
  bootscanner.l has any #include lines at all.)

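  For an affected extension, the adjustment would plausibly look like this
  (a fragment, assuming backend headers on the include path):

      #include "access/stratnum.h"    /* previously reached via gist.h */

      /* StrategyNumber and the RTree strategy numbers now live here */
      static const StrategyNumber overlap_strategy = RTOverlapStrategyNumber;
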
* SQL Standard feature T613 (Sampling) now supported (Simon Riggs, 2015-05-15)

* Fix uninitialized variable. (Tom Lane, 2015-05-15)

  Per compiler warnings.

* Extend GB18030 encoding conversion to cover full Unicode range. (Tom Lane, 2015-05-15)

  Our previous code for GB18030 <-> UTF8 conversion only covered Unicode
  code points up to U+FFFF, but the actual spec defines conversions for all
  code points up to U+10FFFF. That would be rather impractical as a lookup
  table, but fortunately there is a simple algorithmic conversion between
  the additional code points and the equivalent GB18030 byte patterns. Make
  use of the just-added callback facility in LocalToUtf/UtfToLocal to
  perform the additional conversions.

  Having created the infrastructure to do that, we can use the same code to
  map certain linearly-related subranges of the Unicode space below U+FFFF,
  allowing removal of the corresponding lookup table entries. This more than
  halves the lookup table size, which is a substantial savings;
  utf8_and_gb18030.so drops from nearly a megabyte to about half that.

  In support of doing that, replace ISO10646-GB18030.TXT with the data file
  gb-18030-2000.xml (retrieved from
  http://source.icu-project.org/repos/icu/data/trunk/charset/data/xml/ ) in
  which these subranges have been deleted from the simple lookup entries.

  Per bug #12845 from Arjen Nienhuis. The conversion code added here is
  based on his proposed patch, though I whacked it around rather heavily.

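  For the supplementary range the conversion is a pure mixed-radix
  remapping; a standalone sketch of one direction (not the committed code,
  which also handles the linear subranges below U+FFFF and the table
  lookups):

      #include <stdint.h>
      #include <stdio.h>

      /*
       * Map a Unicode code point in U+10000..U+10FFFF to its GB18030
       * four-byte sequence.  The four bytes form a mixed-radix number
       * (radixes 126, 10, 126, 10, with byte ranges 0x81-0xFE and
       * 0x30-0x39); U+10000 corresponds to 0x90 0x30 0x81 0x30, whose
       * linear value is 189000.
       */
      static void
      utf32_to_gb18030_supp(uint32_t cp, uint8_t out[4])
      {
          uint32_t lin = cp - 0x10000 + 189000;

          out[3] = 0x30 + lin % 10;   lin /= 10;
          out[2] = 0x81 + lin % 126;  lin /= 126;
          out[1] = 0x30 + lin % 10;   lin /= 10;
          out[0] = 0x81 + lin;
      }

      int
      main(void)
      {
          uint8_t b[4];

          utf32_to_gb18030_supp(0x10000, b);   /* expect 90 30 81 30 */
          printf("%02X %02X %02X %02X\n", b[0], b[1], b[2], b[3]);
          utf32_to_gb18030_supp(0x10FFFF, b);  /* expect E3 32 9A 35 */
          printf("%02X %02X %02X %02X\n", b[0], b[1], b[2], b[3]);
          return 0;
      }
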
* TABLESAMPLE, SQL Standard and extensible (Simon Riggs, 2015-05-15)

  Add a TABLESAMPLE clause to SELECT statements that allows the user to
  specify random BERNOULLI sampling or block-level SYSTEM sampling. The
  implementation allows extensible sampling functions to be written, using a
  standard API. The basic version follows the SQL Standard exactly. Usable
  concrete use cases for the sampling API follow in later commits.

  Petr Jelinek

  Reviewed by Michael Paquier and Simon Riggs