Commit message (Author, Date)
...
* pg_upgrade: allow upgrades for new-only TOAST tables  (Bruce Momjian, 2014-07-07)
    Previously, when calculations on the need for TOAST tables changed, pg_upgrade could not handle cases where the new cluster needed a TOAST table and the old cluster did not. (It already handled the opposite case.) This fixes the "OID mismatch" error typically generated in this case.
    Backpatch through 9.2
* Fix typos in comments.  (Fujii Masao, 2014-07-07)
* Remove swpb-based spinlock implementation for ARMv5 and earlier.  (Robert Haas, 2014-07-06)
    Per recent analysis by Andres Freund, this implementation is in fact unsafe, because ARMv5 has weak memory ordering, which means that the CPU could move loads or stores across the volatile store performed by the default S_UNLOCK. We could try to fix this, but have no ARMv5 hardware to test on, so removing support seems better. We can still support ARMv5 systems on GCC versions new enough to have built-in atomics support for this platform, and can also re-add support for the old way if someone has hardware that can be used to test a fix. However, since the requirement to use a relatively new GCC hasn't been an issue for ARMv6 or ARMv7, which lack the swpb instruction altogether, perhaps it won't be an issue for ARMv5 either.
* Fix decoding of MULTI_INSERTs when rows other than the last are toasted.  (Andres Freund, 2014-07-06)
    When decoding the results of a HEAP2_MULTI_INSERT (currently only generated by COPY FROM), toast columns for all but the last tuple weren't replaced by their actual contents before being handed to the output plugin. The reassembled toast datums were disregarded after every REORDER_BUFFER_CHANGE_(INSERT|UPDATE|DELETE), which is correct for plain inserts, updates, and deletes, but not for multi-inserts - there we generate several REORDER_BUFFER_CHANGE_INSERTs for a single xl_heap_multi_insert record.
    To solve the problem, add a clear_toast_afterwards boolean to the ReorderBufferChange union member that's used by modifications. All row changes except multi-inserts always set it to true, but multi-insert sets it only for the last change generated.
    Add a regression test covering decoding of multi-inserts - there was none at all before.
    Backpatch to 9.4 where logical decoding was introduced.
    Bug found by Petr Jelinek.
* Consistently pass an "unsigned char" to ctype.h functions.  (Noah Misch, 2014-07-06)
    The isxdigit() calls relied on undefined behavior. The isascii() call was well-defined, but our prevailing style is to include the cast.
    Back-patch to 9.4, where the isxdigit() calls were introduced.
* Remove dead typeStruct variable from plpy_spi.c.  (Kevin Grittner, 2014-07-05)
    Left behind by 8b6010b8350a1756cd85595705971df81b5ffc07.
* Fix double-free bug of WAL streaming buffer in pg_receivexlog.  (Fujii Masao, 2014-07-04)
    This bug was introduced while refactoring in commit 74cbe96.
* Refactor pg_receivexlog main loop code, for readability.  (Fujii Masao, 2014-07-04)
    Previously the source code for receiving the data and for polling the socket was included in the pg_receivexlog main loop. This commit splits it out into separate functions, which improves the readability of the main loop code and makes future pg_receivexlog-related patches simpler.
* Split out the description of page-level locks as a new subsection in the documentation.  (Fujii Masao, 2014-07-04)
    Michael Banck
* Don't cache per-group context across the whole query in orderedsetaggs.c.  (Tom Lane, 2014-07-03)
    Although nodeAgg.c currently uses the same per-group memory context for all groups of a query, that might change in future. Avoid assuming it. This costs us an extra AggCheckCallContext() call per group, but that's pretty cheap and is probably good from a safety standpoint anyway.
    Back-patch to 9.4 in case any third-party code copies this logic.
    Andrew Gierth
* Redesign API presented by nodeAgg.c for ordered-set and similar aggregates.  (Tom Lane, 2014-07-03)
    The previous design exposed the input and output ExprContexts of the Agg plan node, but work on grouping sets has suggested that we'll regret doing that. Instead provide more narrowly-defined APIs that can be implemented in multiple ways, namely a way to get a short-term memory context and a way to register an aggregate shutdown callback.
    Back-patch to 9.4 where the bad APIs were introduced, since we don't want third-party code using these APIs and then having to change in 9.5.
    Andrew Gierth
* Improve support for composite types in PL/Python.  (Tom Lane, 2014-07-03)
    Allow PL/Python functions to return arrays of composite types. Also, fix the restriction that plpy.prepare/plpy.execute couldn't handle query parameters or result columns of composite types.
    In passing, adopt a saner arrangement for where to release the tupledesc reference counts acquired via lookup_rowtype_tupdesc. The callers of PLyObject_ToCompositeDatum were doing the lookups, but then the releases happened somewhere down inside subroutines of PLyObject_ToCompositeDatum, which is bizarre and bug-prone. Instead release in the same function that acquires the refcount.
    Ed Behn and Ronan Dunklau, reviewed by Abhijit Menon-Sen
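    A sketch of the newly allowed case, returning an array of a composite type from PL/Python (the type and function names are made up for illustration, not from the commit):

        CREATE TYPE named_value AS (name text, value integer);

        -- A PL/Python function can now return an array of a composite type,
        -- here built from a list of dicts.
        CREATE FUNCTION make_pairs() RETURNS named_value[] AS $$
            return [{'name': 'a', 'value': 1}, {'name': 'b', 'value': 2}]
        $$ LANGUAGE plpythonu;

        SELECT * FROM unnest(make_pairs());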
* Use a separate temporary directory for the Unix-domain socket  (Peter Eisentraut, 2014-07-02)
    Creating the Unix-domain socket in the build directory can run into name-length limitations. Therefore, create the socket file in the default temporary directory of the operating system. Keep the temporary data directory etc. in the build tree.
* Support vpath builds in TAP tests  (Peter Eisentraut, 2014-07-02)
* Smooth reporting of commit/rollback statistics.  (Kevin Grittner, 2014-07-02)
    If a connection committed or rolled back any transactions within a PGSTAT_STAT_INTERVAL pacing interval without accessing any tables, the reporting of those statistics would be held up until the connection closed or until it ended a PGSTAT_STAT_INTERVAL interval in which it had accessed a table. This could result in under-reporting of transactions for an extended period, followed by a spike in reported transactions.
    While this is arguably a bug, the impact is minimal, primarily affecting, and being affected by, monitoring software. It might cause more confusion than benefit to change the existing behavior in released stable branches, so apply only to master and the 9.4 beta.
    Gurjeet Singh, with review and editing by Kevin Grittner, incorporating suggested changes from Abhijit Menon-Sen and Tom Lane.
* pg_upgrade: preserve database and relation minmxid values  (Bruce Momjian, 2014-07-02)
    Also set these values for pre-9.3 old clusters that don't have values to preserve.
    Analysis by Alvaro
    Backpatch through 9.3
* Rename logical decoding's pg_llog directory to pg_logical.  (Andres Freund, 2014-07-02)
    The old name wasn't very descriptive of the actual contents of the directory, which are historical snapshots in the snapshots/ subdirectory and mapping data for rewritten tuples in mappings/.
    There's been a fair amount of discussion about what would be a good name. I'm settling for pg_logical because it's likely that further data around logical decoding and replication will need saving in the future.
    Also add the missing entry for the directory into storage.sgml's list of PGDATA contents.
    Bumps catversion as the data directories won't be compatible.
* pg_upgrade: no need to remove "members" files for pre-9.3 upgrades  (Bruce Momjian, 2014-07-02)
    Per analysis by Alvaro
    Backpatch through 9.3
* Add some errdetail to checkRuleResultList().  (Tom Lane, 2014-07-02)
    This function wasn't originally thought to be really user-facing, because converting a table to a view isn't something we expect people to do manually. So not all that much effort was spent on the error messages; in particular, while the code will complain that you got the column types wrong it won't say exactly what they are. But since we repurposed the code to also check compatibility of rule RETURNING lists, it's definitely user-facing. It now seems worthwhile to add errdetail messages showing exactly what the conflict is when there's a mismatch of column names or types.
    This is prompted by bug #10836 from Matthias Raffelsieper, which might have been forestalled if the error message had reported the wrong column type as being "record".
    Back-patch to 9.4, but not into older branches where the set of translatable error strings is supposed to be stable.
* Prevent psql from issuing BEGIN before ALTER SYSTEM when AUTOCOMMIT is off.  (Fujii Masao, 2014-07-02)
    The autocommit-off mode works by issuing an implicit BEGIN just before any command that is not already in a transaction block and is not itself a BEGIN or other transaction-control command, nor a command that cannot be executed inside a transaction block. This commit prevents psql from issuing such an implicit BEGIN before ALTER SYSTEM, because it's not allowed inside a transaction block.
    Backpatch to 9.4 where ALTER SYSTEM was added.
    Report by Feike Steenbergen
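    A sketch of the affected sequence in psql (the parameter setting is just an example, not from the commit):

        \set AUTOCOMMIT off
        -- psql no longer wraps this in an implicit BEGIN, since ALTER SYSTEM
        -- cannot be executed inside a transaction block
        ALTER SYSTEM SET work_mem = '64MB';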
* Allow CREATE/ALTER DATABASE to manipulate datistemplate and datallowconn.  (Tom Lane, 2014-07-01)
    Historically these database properties could be manipulated only by manually updating pg_database, which is error-prone and only possible for superusers. But there seems no good reason not to allow database owners to set them for their databases, so invent CREATE/ALTER DATABASE options to do that. Adjust a couple of places that were doing it the hard way to use the commands instead.
    Vik Fearing, reviewed by Pavel Stehule
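    A sketch of the new options, assuming they are spelled IS_TEMPLATE and ALLOW_CONNECTIONS (the database name is made up):

        ALTER DATABASE mydb IS_TEMPLATE true;         -- sets datistemplate
        ALTER DATABASE mydb ALLOW_CONNECTIONS false;  -- sets datallowconn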
* Refactor CREATE/ALTER DATABASE syntax so options need not be keywords.  (Tom Lane, 2014-07-01)
    Most of the existing option names are keywords anyway, but we can get rid of LC_COLLATE and LC_CTYPE as keywords known to the lexer/grammar. This immediately reduces the size of the grammar tables by about 8KB, and will save more when we add additional CREATE/ALTER DATABASE options in future.
    A side effect of the implementation is that the CONNECTION LIMIT option can now also be spelled CONNECTION_LIMIT. We choose not to document this, however.
    Vik Fearing, based on a suggestion by me; reviewed by Pavel Stehule
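    A sketch of the side effect described above (the database name is made up):

        CREATE DATABASE testdb CONNECTION LIMIT 10;   -- documented spelling
        ALTER DATABASE testdb CONNECTION_LIMIT 20;    -- now also accepted, though undocumented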
* Remove some useless code in the configure script.  (Tom Lane, 2014-07-01)
    Almost ten years ago, commit e48322a6d6cfce1ec52ab303441df329ddbc04d1 broke the logic in ACX_PTHREAD by looping through all the possible flags rather than stopping with the first one that would work. This meant that $acx_pthread_ok was no longer meaningful after the loop; it would usually be "no", whether or not we'd found working thread flags. The reason nobody noticed is that Postgres doesn't actually use any of the symbols set up by the code after the loop. Rather than complicate things some more to make it work as designed, let's just remove all that dead code, and thereby save a few cycles in each configure run.
* Improve handling of OOM score adjustment in sample Linux start script.  (Tom Lane, 2014-07-01)
    Per a suggestion from Christoph Berg.
* Fix inadequately-sized output buffer in contrib/unaccent.  (Tom Lane, 2014-07-01)
    The output buffer size in unaccent_lexize() was calculated as input string length times pg_database_encoding_max_length(), which effectively assumes that replacement strings aren't more than one character. While that was all that we previously documented it to support, the code actually has always allowed replacement strings of arbitrary length; so if you tried to make use of longer strings, you were at risk of buffer overrun. To fix, use an expansible StringInfo buffer instead of trying to determine the maximum space needed a-priori.
    This would be a security issue if unaccent rules files could be installed by unprivileged users; but fortunately they can't, so in the back branches the problem can be labeled as improper configuration by a superuser. Nonetheless, a memory stomp isn't a nice way of reacting to improper configuration, so let's back-patch the fix.
* Avoid copying index tuples when building an index.  (Robert Haas, 2014-07-01)
    The previous code, perhaps out of concern for avoiding memory leaks, formed the tuple in one memory context and then copied it to another memory context. However, this doesn't appear to be necessary, since index_form_tuple and the functions it calls take precautions against leaking memory. In my testing, building the tuple directly inside the sort context shaves several percent off the index build time. Rearrange things so we do that.
    Patch by me. Review by Amit Kapila, Tom Lane, Andres Freund.
* Issue a WARNING about invalid rule file format in contrib/unaccent.  (Tom Lane, 2014-06-30)
    We were already issuing a WARNING, albeit only elog not ereport, for duplicate source strings; so warning rather than just being stoically silent seems like the best thing to do here. Arguably both of these complaints should be upgraded to ERRORs, but that might be more behavioral change than people want.
    Note: the faulty line is already printed via an errcontext hook, so there's no need for more information than these messages provide.
* Allow multi-character source strings in contrib/unaccent.  (Tom Lane, 2014-06-30)
    This could be useful in languages where diacritic signs are represented as separate characters; more generally it supports using unaccent dictionaries for substring substitutions beyond narrowly conceived "diacritic removal". In any case, since the rule-file parser doesn't complain about multi-character source strings, it behooves us to do something unsurprising with them.
* Allow empty replacement strings in contrib/unaccent.  (Tom Lane, 2014-06-30)
    This is useful in languages where diacritic signs are represented as separate characters; it's also one step towards letting unaccent be used for arbitrary substring substitutions.
    In passing, improve the user documentation for unaccent, which was sadly vague about some important details.
    Mohammad Alhashash, reviewed by Abhijit Menon-Sen
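    A sketch of rules that the unaccent rule-file parser can handle after this entry and the one just above it (each line is a source string, whitespace, then an optional replacement; the specific rules are made-up examples):

        À          A
        No.        Number
        ¡

    Here "No." is a multi-character source used for a substring substitution, and "¡" has an empty replacement, so it is simply removed; with such a dictionary, something like SELECT unaccent('À No. 7 ¡hola!') would drop the "¡" and expand the other matches.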
* pg_upgrade: update C comments about pg_dumpall  (Bruce Momjian, 2014-06-30)
    There were some C comments that hadn't been updated since the switch from using only pg_dumpall to using pg_dump and pg_dumpall, so update them. Also, don't bother using --schema-only for pg_dumpall --globals-only.
    Backpatch through 9.4
* Don't prematurely free the BufferAccessStrategy in pgstat_heap().  (Noah Misch, 2014-06-30)
    This function continued to use it after heap_endscan() freed it. In passing, don't explicitly create a strategy here. Instead, use the one created by heap_beginscan_strat(), if any.
    Back-patch to 9.2, where use of a BufferAccessStrategy here was introduced.
* Fix typos in the cluster_name commit.  (Andres Freund, 2014-06-30)
    Thom Brown and Fujii Masao
* Check interrupts during logical decoding more frequently.  (Andres Freund, 2014-06-30)
    When reading large amounts of preexisting WAL during logical decoding using the SQL interface we possibly could fail to check interrupts in due time. Similarly the same could happen on systems with a very high WAL volume while creating a new logical replication slot, independent of the used interface.
    Previously these checks were only performed in xlogreader's read_page callbacks, while waiting for new WAL to be produced. That's not sufficient though, if there's never a need to wait. Walsender's send loop already contains an interrupt check.
    Backpatch to 9.4 where the logical decoding feature was introduced.
* Fix and enhance the assertion of no palloc's in a critical section.  (Heikki Linnakangas, 2014-06-30)
    The assertion failed if WAL_DEBUG or LWLOCK_STATS was enabled; fix that by using separate memory contexts for the allocations made within those code blocks.
    This patch introduces a mechanism for marking any memory context as allowed in a critical section. Previously ErrorContext was exempt as a special case. Instead of a blanket exemption for the checkpointer process, only exempt the memory context used for the pending ops hash table.
* Remove use_json_as_text options from json_to_record/json_populate_record.  (Tom Lane, 2014-06-29)
    The "false" case was really quite useless, since all it did was to throw an error; a definition not helped in the least by making it the default. Instead let's just have the "true" case, which emits nested objects and arrays in JSON syntax. We might later want to provide the ability to emit sub-objects in Postgres record or array syntax, but we'd be best off to drive that off a check of the target field datatype, not a separate argument.
    For the functions newly added in 9.4, we can just remove the flag arguments outright. We can't do that for json_populate_record[set], which already existed in 9.3, but we can ignore the argument and always behave as if it were "true". It helps that the flag arguments were optional and not documented in any useful fashion anyway.
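    A sketch of the resulting behavior (the type and values are hypothetical):

        CREATE TYPE nested_rec AS (a int, b text);

        -- The nested object is emitted into the text column in JSON syntax,
        -- rather than raising an error as the old use_json_as_text=false did.
        SELECT * FROM json_populate_record(null::nested_rec,
                                           '{"a": 1, "b": {"c": 2}}');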
* Add cluster_name GUC which is included in process titles if set.  (Andres Freund, 2014-06-29)
    When running several postgres clusters on one OS instance it's often inconveniently hard to identify which "postgres" process belongs to which postgres instance. Add the cluster_name GUC, whose value will be included as part of the process titles if set. With that, processes can be more easily identified using tools like 'ps'.
    To avoid problems with encoding mismatches between postgresql.conf, consoles, and individual databases, replace non-ASCII chars in the name with question marks. The length is limited to NAMEDATALEN to make it less likely to truncate important information at the end of the status.
    Thomas Munro, with some adjustments by me and review by a host of people.
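    A sketch of how this might be used (the cluster name and the exact title format shown in the comment are assumptions, not from the commit):

        ALTER SYSTEM SET cluster_name = 'main';
        -- takes effect after a restart; process titles then include the name,
        -- e.g. something like "postgres: main: checkpointer process" in ps output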
* Remove Alpha and Tru64 support.  (Andres Freund, 2014-06-28)
    Support for running postgres on Alpha hasn't been tested for a long while. Due to Alpha's uniquely lax cache coherency model it's a hard platform to develop for (especially blindly!), and it is thought to be unlikely to currently work correctly.
    As Alpha is the only supported architecture for Tru64, drop support for it as well. Tru64 support ended in 2012 and it has been in maintenance-only mode for much longer.
    Also remove stray references to __ksr__ and ultrix defines.
* Allow pushdown of WHERE quals into subqueries with window functions.  (Tom Lane, 2014-06-27)
    We can allow this even without any specific knowledge of the semantics of the window function, so long as pushed-down quals will either accept every row in a given window partition, or reject every such row. Because window functions act only within a partition, such a case can't result in changing the window functions' outputs for any surviving row. Eliminating entire partitions in this way obviously can reduce the cost of the window-function computations substantially.
    The fly in the ointment is that it's hard to be entirely sure whether this is true for an arbitrary qual condition. This patch allows pushdown if (a) the qual references only partitioning columns, and (b) the qual contains no volatile functions. We are at risk of incorrect results if the qual can produce different answers for values that the partitioning equality operator sees as equal. While it's not hard to invent cases for which that can happen, it seems to seldom be a problem in practice, since no one has complained about a similar assumption that we've had for many years with respect to DISTINCT. The potential performance gains seem to be worth the risk.
    David Rowley, reviewed by Vik Fearing; some credit is due also to Thomas Mayer who did considerable preliminary investigation.
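    A sketch of a query that now benefits (the table and column names are made up):

        SELECT *
        FROM (SELECT depname, empno, salary,
                     rank() OVER (PARTITION BY depname ORDER BY salary DESC) AS rnk
              FROM empsalary) AS ranked
        WHERE depname = 'sales';
        -- The qual references only the partitioning column and is not volatile,
        -- so it can be pushed into the subquery and eliminate other partitions
        -- before the window function runs.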
* Have multixact be truncated by checkpoint, not vacuum  (Alvaro Herrera, 2014-06-27)
    Instead of truncating pg_multixact at vacuum time, do it only at checkpoint time. The reason for doing it this way is twofold: first, we want it to delete only segments that we're certain will not be required if there's a crash immediately after the removal; and second, we want to do it relatively often so that older files are not left behind if there's an untimely crash.
    Per my proposal in http://www.postgresql.org/message-id/20140626044519.GJ7340@eldon.alvh.no-ip.org we now execute the truncation in the checkpointer process rather than as part of vacuum. Vacuum is only in charge of maintaining in shared memory the value to which it's possible to truncate the files; that value is stored as part of checkpoints also, and so upon recovery we can reuse the same value to re-execute truncate and reset the oldest-value-still-safe-to-use to one known to remain after truncation.
    Per bug reported by Jeff Janes in the course of his tests involving bug #8673.
    While at it, update some comments that hadn't been updated since multixacts were changed.
    Backpatch to 9.3, where persistency of pg_multixact files was introduced by commit 0ac5ad5134f2.
* Don't allow relminmxid to go backwards during VACUUM FULL  (Alvaro Herrera, 2014-06-27)
    We were allowing a table's pg_class.relminmxid value to move backwards when heaps were swapped by VACUUM FULL or CLUSTER. There is a similar protection against relfrozenxid going backwards, which we neglected to clone when the multixact stuff was rejiggered by commit 0ac5ad5134f276.
    Backpatch to 9.3, where relminmxid was introduced.
    As reported by Heikki in http://www.postgresql.org/message-id/52401AEA.9000608@vmware.com
* Fix broken Assert() introduced by 8e9a16ab8f7f0e58  (Alvaro Herrera, 2014-06-27)
    Don't assert MultiXactIdIsRunning if the multi came from a tuple that had been share-locked and later copied over to the new cluster by pg_upgrade. Doing that causes an error to be raised unnecessarily: MultiXactIdIsRunning is not open to the possibility that its argument came from a pg_upgraded tuple, and all its other callers are already checking; but such multis cannot, obviously, have transactions still running, so the assert is pointless.
    Noticed while investigating the bogus pg_multixact/offsets/0000 file left over by pg_upgrade, as reported by Andres Freund in http://www.postgresql.org/message-id/20140530121631.GE25431@alap3.anarazel.de
    Backpatch to 9.3, same as the commit that introduced the buglet.
* Disallow pushing volatile qual expressions down into DISTINCT subqueries.  (Tom Lane, 2014-06-27)
    A WHERE clause applied to the output of a subquery with DISTINCT should theoretically be applied only once per distinct row; but if we push it into the subquery then it will be evaluated at each row before duplicate elimination occurs. If the qual is volatile this can give rise to observably wrong results, so don't do that.
    While at it, refactor a little bit to allow subquery_is_pushdown_safe to report more than one kind of restrictive condition without indefinitely expanding its argument list.
    Although this is a bug fix, it seems unwise to back-patch it into released branches, since it might de-optimize plans for queries that aren't giving any trouble in practice. So apply to 9.4 but not further back.
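    A sketch of the kind of query affected (the table and column names are made up):

        SELECT * FROM (SELECT DISTINCT col FROM tab) AS ss
        WHERE random() < 0.5;
        -- random() is volatile, so it must be evaluated once per distinct row;
        -- pushing it below the DISTINCT would evaluate it per input row and
        -- could change which rows survive.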
* Get rid of bogus separate pg_proc entries for json_extract_path operators.  (Tom Lane, 2014-06-26)
    These should not have existed to begin with, but there was apparently some misunderstanding of the purpose of the opr_sanity regression test item that checks for operator implementation functions with their own comments. The idea there is to check for unintentional violations of the rule that operator implementation functions shouldn't be documented separately ... but for these functions, that is in fact what we want, since the variadic option is useful and not accessible via the operator syntax.
    Get rid of the extra pg_proc entries and fix the regression test and documentation to be explicit about what we're doing here.
* Forward-patch regression test for "could not find pathkey item to sort".  (Tom Lane, 2014-06-26)
    Commit a87c729153e372f3731689a7be007bc2b53f1410 already fixed the bug this is checking for, but the regression test case it added didn't cover this scenario. Since we managed to miss the fact that there was a bug at all, it seems like a good idea to propagate the extra test case forward to HEAD.
* Remove obsolete example of CSV log file name from log_filename document.  (Fujii Masao, 2014-06-26)
    7380b63 changed log_filename so that the epoch was not appended to it when no format specifier is given. But the example of a CSV log file name with the epoch was still left in the log_filename documentation. This commit removes that obsolete example.
    This commit also documents the defaults of log_directory and log_filename.
    Backpatch to all supported versions.
    Christoph Berg
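    A sketch of inspecting the two settings involved (the default values shown in the comments are what I'd expect for this era and should be treated as assumptions):

        SHOW log_directory;   -- expected default around this release: 'pg_log'
        SHOW log_filename;    -- expected default: 'postgresql-%Y-%m-%d_%H%M%S.log'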
* Rationalize error messages within jsonfuncs.c.  (Tom Lane, 2014-06-25)
    I noticed that the functions in jsonfuncs.c sometimes printed error messages that claimed I'd called some other function. Investigation showed that this was from repurposing code into "worker" functions without taking much care as to whether it would mention the right SQL-level function if it threw an error. Moreover, there was a weird mishmash of messages that contained a fixed function name, messages that used %s for a function name, and messages that constructed a function name out of spare parts, like "json%s_populate_record" (which, quite aside from being ugly as sin, wasn't even sufficient to cover all the cases). This would put an undue burden on our long-suffering translators.
    Standardize on inserting the SQL function name with %s so as to reduce the number of translatable strings, and pass function names around as needed to make sure we can report the right one. Fix up some gratuitous variations in wording, too.
* Cosmetic improvements in jsonfuncs.c.  (Tom Lane, 2014-06-25)
    Re-pgindent, remove a lot of random vertical whitespace, remove useless (if not counterproductive) inline markings, get rid of unnecessary zero-padding of strings for hashtable searches. No functional changes.
* Fix handling of nested JSON objects in json_populate_recordset and friends.  (Tom Lane, 2014-06-24)
    populate_recordset_object_start() improperly created a new hash table (overwriting the link to the existing one) if called at nest levels greater than one. This resulted in previous fields not appearing in the final output, as reported by Matti Hameister in bug #10728. In 9.4 the problem also affects json_to_recordset.
    This perhaps missed detection earlier because the default behavior is to throw an error for nested objects: you have to pass use_json_as_text = true to see the problem.
    In addition, fix query-lifespan leakage of the hashtable created by json_populate_record(). This is pretty much the same problem recently fixed in dblink: creating an intended-to-be-temporary context underneath the executor's per-tuple context isn't enough to make it go away at the end of the tuple cycle, because MemoryContextReset is not MemoryContextResetAndDeleteChildren.
    Michael Paquier and Tom Lane
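    A sketch of the kind of call that misbehaved (the type and data are made up; with the fix, the "a" fields are no longer lost when a nested object is present):

        CREATE TYPE two_col AS (a int, b text);
        SELECT * FROM json_populate_recordset(null::two_col,
            '[{"a": 1, "b": {"x": 10}}, {"a": 2, "b": {"y": 20}}]');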
* pg_upgrade: remove pg_multixact files left by initdb  (Bruce Momjian, 2014-06-24)
    This fixes a bug that caused vacuum to fail when the '0000' files left by initdb were accessed as part of vacuum's cleanup of old pg_multixact files.
    Backpatch through 9.3
* Don't allow foreign tables with OIDs.  (Heikki Linnakangas, 2014-06-24)
    The syntax doesn't let you specify "WITH OIDS" for foreign tables, but it was still possible with default_with_oids=true. But the rest of the system, including pg_dump, isn't prepared to handle foreign tables with OIDs properly.
    Backpatch down to 9.1, where foreign tables were introduced. It's possible that there are databases out there that already have foreign tables with OIDs. There isn't much we can do about that, but at least we can prevent them from being created in the future.
    Patch by Etsuro Fujita, reviewed by Hadi Moshayedi.
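    A sketch of the now-prevented combination (the table and server names are made up, and a real foreign-data wrapper would have to exist behind the server):

        SET default_with_oids = true;
        -- previously this could produce a foreign table with OIDs;
        -- after this commit such creation is prevented
        CREATE FOREIGN TABLE remote_items (id int) SERVER some_server;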