path: root/src/bin
Commit message (Author, Date)
* psql: Add missing file to nls.mk (Peter Eisentraut, 2016-06-07)
  crosstabview.c was not added to nls.mk when it was added. Also remove redundant gettext markers, since psql_error() is already registered as a gettext keyword.
* pgbench: Fix order in --help output (Peter Eisentraut, 2016-06-07)
  The new option --progress-timestamp was just added at the end. Put it in alphabetical order.
* pg_dump only selected components of ACCESS METHODs (Stephen Frost, 2016-06-07)
  dumpAccessMethod() didn't get the memo that we now have a bitfield for the components which should be dumped instead of a simple boolean. Correct that by checking if the relevant bit is set for each component being dumped out (and not dumping it out if it isn't set).

  This corrects an issue where CREATE ACCESS METHOD commands were being included in non-binary-upgrade dumps when an extension included an access method (as the bloom extension does).

  Also add a regression test to make sure that ACCESS METHOD commands which are part of an extension are only dumped out when doing a binary upgrade.

  Pointed out by Thom Brown.
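A standalone sketch of the kind of per-component check described above. The DUMP_COMPONENT_* names follow pg_dump's component flags, but the bit values and the surrounding code are invented for illustration; this is not the actual dumpAccessMethod():

    #include <stdio.h>

    /* Component bits, modeled on pg_dump's DUMP_COMPONENT_* flags. */
    #define DUMP_COMPONENT_DEFINITION  (1 << 0)
    #define DUMP_COMPONENT_COMMENT     (1 << 1)
    #define DUMP_COMPONENT_ACL         (1 << 2)

    static void
    dump_access_method(unsigned int dump)
    {
        /* Test each bit instead of treating "dump" as a simple boolean. */
        if (dump & DUMP_COMPONENT_DEFINITION)
            printf("CREATE ACCESS METHOD ...;\n");
        if (dump & DUMP_COMPONENT_COMMENT)
            printf("COMMENT ON ACCESS METHOD ...;\n");
    }

    int
    main(void)
    {
        /* Binary upgrade of an extension member: definition bit set. */
        dump_access_method(DUMP_COMPONENT_DEFINITION);
        /* Plain dump of an extension member: no definition, no CREATE. */
        dump_access_method(DUMP_COMPONENT_ACL);
        return 0;
    }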
* pg_upgrade: Don't overwrite existing files. (Robert Haas, 2016-06-06)
  For historical reasons, copyFile and rewriteVisibilityMap took a force argument which was always passed as true, meaning that any existing file should be overwritten. However, it seems much safer to instead fail if a file we need to write already exists.

  While we're at it, remove the "force" argument altogether, since it was never passed as anything other than true (and now we would never pass it as anything other than false, if we kept it).

  Noted by Andres Freund during post-commit review of the patch that added rewriteVisibilityMap, commit 7087166a88fe0c04fc6636d0d6d6bea1737fc1fb, but this also changes the behavior when copying files without rewriting them.

  Patch by Masahiko Sawada.
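The fail-if-exists behavior can come directly from open(2)'s O_EXCL flag; a minimal sketch under that assumption, not the actual pg_upgrade code (the file name is hypothetical):

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Create dst, refusing to overwrite an existing file. */
    static int
    create_target_file(const char *dst)
    {
        /* O_EXCL makes open() fail with EEXIST instead of clobbering. */
        int fd = open(dst, O_WRONLY | O_CREAT | O_EXCL, 0600);

        if (fd < 0)
        {
            perror(dst);
            exit(1);
        }
        return fd;
    }

    int
    main(void)
    {
        int fd = create_target_file("relfile.vm");

        close(fd);
        return 0;
    }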
* pg_upgrade: Improve error checking in rewriteVisibilityMap. (Robert Haas, 2016-06-06)
  In the old logic, if read() were to return an error, we'd silently stop rewriting the visibility map at that point in the file. That's safe, but reporting the error is better, so do that instead.

  Report by Andres Freund. Patch by Masahiko Sawada, with one correction by me.
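The difference between the old and new logic amounts to checking read()'s return value for -1 after the loop; a self-contained sketch of the pattern, not the committed code:

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int
    main(int argc, char **argv)
    {
        char    buf[8192];
        ssize_t nread;
        int     fd;

        if (argc != 2)
            exit(1);
        fd = open(argv[1], O_RDONLY);
        if (fd < 0)
            exit(1);

        while ((nread = read(fd, buf, sizeof(buf))) > 0)
            ;                       /* rewrite this chunk of the map here */

        /* The old loop effectively treated an error here like EOF. */
        if (nread < 0)
        {
            fprintf(stderr, "could not read \"%s\": %s\n",
                    argv[1], strerror(errno));
            exit(1);
        }
        close(fd);
        return 0;
    }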
* Suppress -Wunused-result warnings about write(), again. (Tom Lane, 2016-06-03)
  Adopt the same solution as in commit aa90e148ca70a235, but this time let's put the ugliness inside the write_stderr() macro, instead of expecting each call site to deal with it. Back-port that decision into psql/common.c where I got the macro from in the first place.

  Per gripe from Peter Eisentraut.
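The trick is to capture write()'s result into a variable and explicitly discard it once, inside the macro, so every call site stays clean; a sketch along the lines of the psql macro (details may differ from the committed version):

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /*
     * Write a string to stderr using only low-level calls, swallowing
     * write()'s result here rather than at every call site so glibc's
     * warn_unused_result attribute stays quiet.
     */
    #define write_stderr(str) \
        do { \
            const char *str_ = (str); \
            int rc_; \
            rc_ = write(fileno(stderr), str_, strlen(str_)); \
            (void) rc_; \
        } while (0)

    int
    main(void)
    {
        write_stderr("something went wrong\n");
        return 0;
    }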
* Fix various common misspellings. (Greg Stark, 2016-06-03)
  Mostly these are just comments but there are a few in documentation and a handful in code and tests.

  Hopefully this doesn't cause too much unnecessary pain for backpatching. I relented from some of the most common like "thru" for that reason. The rest don't seem numerous enough to cause problems.

  Thanks to Kevin Lyda's tool https://pypi.python.org/pypi/misspellings
* Redesign handling of SIGTERM/control-C in parallel pg_dump/pg_restore. (Tom Lane, 2016-06-02)
  Formerly, Unix builds of pg_dump/pg_restore would trap SIGINT and similar signals and set a flag that was tested in various data-transfer loops. This was prone to errors of omission (cf commit 3c8aa6654); and even if the client-side response was prompt, we did nothing that would cause long-running SQL commands (e.g. CREATE INDEX) to terminate early. Also, the master process would effectively do nothing at all upon receipt of SIGINT; the only reason it seemed to work was that in typical scenarios the signal would also be delivered to the child processes. We should support termination when a signal is delivered only to the master process, though.

  Windows builds had no console interrupt handler, so they would just fall over immediately at control-C, again leaving long-running SQL commands to finish unmolested.

  To fix, remove the flag-checking approach altogether. Instead, allow the Unix signal handler to send a cancel request directly and then exit(1). In the master process, also have it forward the signal to the children. On Windows, add a console interrupt handler that behaves approximately the same. The main difference is that a single execution of the Windows handler can send all the cancel requests since all the info is available in one process, whereas on Unix each process sends a cancel only for its own database connection.

  In passing, fix an old problem that DisconnectDatabase tends to send a cancel request before exiting a parallel worker, even if nothing went wrong. This is at least a waste of cycles, and could lead to unexpected log messages, or maybe even data loss if it happened in pg_restore (though in the current code the problem seems to affect only pg_dump). The cause was that after a COPY step, pg_dump was leaving libpq in PGASYNC_BUSY state, causing PQtransactionStatus() to report PQTRANS_ACTIVE. That's normally harmless because the next PQexec() will silently clear the PGASYNC_BUSY state; but in a parallel worker we might exit without any additional SQL commands after a COPY step. So add an extra PQgetResult() call after a COPY to allow libpq to return to PGASYNC_IDLE state.

  This is a bug fix, IMO, so back-patch to 9.3 where parallel dump/restore were introduced.

  Thanks to Kyotaro Horiguchi for Windows testing and code suggestions.

  Original-Patch: <7005.1464657274@sss.pgh.pa.us>
  Discussion: <20160602.174941.256342236.horiguchi.kyotaro@lab.ntt.co.jp>
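What makes the handler-sends-cancel approach workable is that libpq documents PQcancel() as safe to call from a signal handler. A minimal Unix-side sketch (not the pg_dump code; the master's signal forwarding is omitted):

    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <libpq-fe.h>

    static PGcancel *cancel_conn = NULL;

    static void
    sigint_handler(int signum)
    {
        char errbuf[256];

        (void) signum;
        /* PQcancel is documented as usable from a signal handler. */
        if (cancel_conn != NULL)
            PQcancel(cancel_conn, errbuf, sizeof(errbuf));
        _exit(1);
    }

    int
    main(void)
    {
        PGconn   *conn = PQconnectdb("");   /* params from environment */
        PGresult *res;

        if (PQstatus(conn) != CONNECTION_OK)
            return 1;
        cancel_conn = PQgetCancel(conn);
        signal(SIGINT, sigint_handler);

        /* Control-C now cancels this server-side command promptly. */
        res = PQexec(conn, "SELECT pg_sleep(60)");
        PQclear(res);
        PQfreeCancel(cancel_conn);
        PQfinish(conn);
        return 0;
    }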
* Clean up some minor inefficiencies in parallel dump/restore. (Tom Lane, 2016-06-01)
  Parallel dump did a totally pointless query to find out the name of each table to be dumped, which it already knows. Parallel restore runs issued lots of redundant SET commands because _doSetFixedOutputState() was invoked once per TOC item rather than just once at connection start. While the extra queries are insignificant if you're dumping or restoring large tables, it still seems worth getting rid of them.

  Also, give the responsibility for selecting the right client_encoding for a parallel dump worker to setup_connection() where it naturally belongs, instead of having ad-hoc code for that in CloneArchive(). And fix some minor bugs like use of strdup() where pg_strdup() would be safer.

  Back-patch to 9.3, mostly to keep the branches in sync in an area that we're still finding bugs in.

  Discussion: <5086.1464793073@sss.pgh.pa.us>
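The COPY-draining fix mentioned two entries above corresponds to the standard libpq pattern of calling PQgetResult() until it returns NULL; a sketch of that pattern, with my_table as a hypothetical target:

    #include <stdio.h>
    #include <libpq-fe.h>

    /* End a COPY IN and drain results so libpq returns to idle state. */
    static void
    finish_copy(PGconn *conn)
    {
        PGresult *res;

        if (PQputCopyEnd(conn, NULL) <= 0)
            fprintf(stderr, "%s", PQerrorMessage(conn));

        /* Without this loop the connection is left "busy" after COPY. */
        while ((res = PQgetResult(conn)) != NULL)
        {
            if (PQresultStatus(res) != PGRES_COMMAND_OK)
                fprintf(stderr, "COPY failed: %s", PQerrorMessage(conn));
            PQclear(res);
        }
    }

    int
    main(void)
    {
        PGconn   *conn = PQconnectdb("");
        PGresult *res = PQexec(conn, "COPY my_table FROM STDIN");

        if (PQresultStatus(res) == PGRES_COPY_IN)
        {
            PQclear(res);
            PQputCopyData(conn, "1\tone\n", 6);
            finish_copy(conn);
        }
        PQfinish(conn);
        return 0;
    }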
* Fix missing abort checks in pg_backup_directory.c. (Tom Lane, 2016-05-29)
  Parallel restore from directory format failed to respond to control-C in a timely manner, because there were no checkAborting() calls in the code path that reads data from a file and sends it to the backend. If any worker was in the midst of restoring data for a large table, you'd just have to wait.

  This fix doesn't do anything for the problem of aborting a long-running server-side command, but at least it fixes things for data transfers.

  Back-patch to 9.3 where parallel restore was introduced.
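What was missing is an abort check inside the read-and-send loop itself; a self-contained sketch of the pattern, with check_aborting() standing in for pg_dump's checkAborting():

    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>

    static volatile sig_atomic_t aborting = 0;

    static void
    handle_sigint(int signum)
    {
        (void) signum;
        aborting = 1;
    }

    static void
    check_aborting(void)
    {
        if (aborting)
        {
            fprintf(stderr, "terminated by user request\n");
            exit(1);
        }
    }

    static void
    restore_data(FILE *fp)
    {
        char   buf[8192];
        size_t n;

        while ((n = fread(buf, 1, sizeof(buf), fp)) > 0)
        {
            check_aborting();           /* the call that was missing */
            fwrite(buf, 1, n, stdout);  /* stand-in for "send to backend" */
        }
    }

    int
    main(void)
    {
        signal(SIGINT, handle_sigint);
        restore_data(stdin);
        return 0;
    }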
* Remove pg_dump/parallel.c's useless "aborting" flag. (Tom Lane, 2016-05-29)
  This was effectively dead code, since the places that tested it could not be reached after we entered the on-exit-cleanup routine that would set it. It seems to have been a leftover from a design in which error abort would try to send fresh commands to the workers --- a design which could never have worked reliably, of course. Since the flag is not cross-platform, it complicates reasoning about the code's behavior, which we could do without.

  Although this is effectively just cosmetic, back-patch anyway, because there are some actual bugs in the vicinity of this behavior.

  Discussion: <15583.1464462418@sss.pgh.pa.us>
* Lots of comment-fixing, and minor cosmetic cleanup, in pg_dump/parallel.c. (Tom Lane, 2016-05-28)
  The commentary in this file was in extremely sad shape. The author(s) had clearly never heard of the project convention that a function header comment should provide an API spec of some sort for that function. Much of it was flat out wrong, too --- maybe it was accurate when written, but if so it had not been updated to track subsequent code revisions. Rewrite and rearrange to try to bring it up to speed, and annotate some of the places where more work is needed. (I've refrained from actually fixing anything of substance ... yet.)

  Also, rename a couple of functions for more clarity as to what they do, do some very minor code rearrangement, remove some pointless Asserts, fix an incorrect Assert in readMessageFromPipe, and add a missing socket close in one error exit from pgpipe(). The last would be a bug if we tried to continue after pgpipe() failure, but since we don't, it's just cosmetic at present.

  Although this is only cosmetic, back-patch to 9.3 where parallel.c was added. It's sufficiently invasive that it'll pose a hazard for future back-patching if we don't.

  Discussion: <25239.1464386067@sss.pgh.pa.us>
* Clean up thread management in parallel pg_dump for Windows. (Tom Lane, 2016-05-27)
  Since we start the worker threads with _beginthreadex(), we should use _endthreadex() to terminate them. We got this right in the normal-exit code path, but not so much during an error exit from a worker. In addition, be sure to apply CloseHandle to the thread handle after each thread exits.

  It's not clear that these oversights cause any user-visible problems, since the pg_dump run is about to terminate anyway. Still, it's clearly better to follow Microsoft's API specifications than ignore them.

  Also a few cosmetic cleanups in WaitForTerminatingWorkers(), including being a bit less random about where to cast between uintptr_t and HANDLE, and being sure to clear the worker identity field for each dead worker (not that false matches should be possible later, but let's be careful).

  Original observation and patch by Armin Schöffmann, cosmetic improvements by Michael Paquier and me. (Armin's patch also included closing sockets in ShutdownWorkersHard(), but that's been dealt with already in commit df8d2d8c4.)

  Back-patch to 9.3 where parallel pg_dump was introduced.

  Discussion: <zarafa.570306bd.3418.074bf1420d8f2ba2@root.aegaeon.de>
* Make pg_dump error cleanly with -j against hot standby (Magnus Hagander, 2016-05-26)
  Getting a synchronized snapshot is not supported on a hot standby node, but one is taken by default when using -j with multiple sessions. Trying to do so still failed, but with a server error that would also go in the log. Instead, properly detect this case and give a better error message.
* Make pg_dump behave more sanely when built without HAVE_LIBZ. (Tom Lane, 2016-05-26)
  For some reason the code to emit a warning and switch to uncompressed output was placed down in the guts of pg_backup_archiver.c. This is definitely too late in the case of parallel operation (and I rather wonder if it wasn't too late for other purposes as well). Put it in pg_dump.c's option-processing logic, which seems a much saner place.

  Also, the default behavior with custom or directory output format was to emit the warning telling you the output would be uncompressed. This seems unhelpful, so silence that case.

  Back-patch to 9.3 where parallel dump was introduced.

  Kyotaro Horiguchi, adjusted a bit by me

  Report: <20160526.185551.242041780.horiguchi.kyotaro@lab.ntt.co.jp>
* In Windows pg_dump, ensure idle workers will shut down during error exit. (Tom Lane, 2016-05-26)
  The Windows coding of ShutdownWorkersHard() thought that setting termEvent was sufficient to make workers exit after an error. But that only helps if a worker is busy and passes through checkAborting(). An idle worker will just sit, resulting in pg_dump failing to exit until the user gives up and hits control-C. We should close the write end of the command pipe so that idle workers will see socket EOF and exit, as the Unix coding was already doing.

  Back-patch to 9.3 where parallel pg_dump was introduced.

  Kyotaro Horiguchi
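The EOF mechanism here is plain pipe semantics: once the writer's end is closed, a blocked reader's read() returns 0. A tiny demonstration:

    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
        int     fds[2];
        char    buf[64];
        ssize_t n;

        if (pipe(fds) != 0)
            return 1;
        close(fds[1]);                  /* master closes the write end... */

        n = read(fds[0], buf, sizeof(buf));
        if (n == 0)                     /* ...so the idle reader sees EOF */
            fprintf(stderr, "worker sees EOF and can exit\n");
        return 0;
    }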
* Fix broken error handling in parallel pg_dump/pg_restore. (Tom Lane, 2016-05-25)
  In the original design for parallel dump, worker processes reported errors by sending them up to the master process, which would print the messages. This is unworkably fragile for a couple of reasons: it risks deadlock if a worker sends an error at an unexpected time, and if the master has already died for some reason, the user will never get to see the error at all. Revert that idea and go back to just always printing messages to stderr. This approach means that if all the workers fail for similar reasons (eg, bad password or server shutdown), the user will see N copies of that message, not only one as before. While that's slightly annoying, it's certainly better than not seeing any message; not to mention that we shouldn't assume that only the first failure is interesting.

  An additional problem in the same area was that the master failed to disable SIGPIPE (at least until much too late), which meant that sending a command to an already-dead worker would cause the master to crash silently. That was bad enough in itself but was made worse by the total reliance on the master to print errors: even if the worker had reported an error, you would probably not see it, depending on timing. Instead disable SIGPIPE right after we've forked the workers, before attempting to send them anything.

  Additionally, the master relies on seeing socket EOF to realize that a worker has exited prematurely --- but on Windows, there would be no EOF since the socket is attached to the process that includes both the master and worker threads, so it remains open. Make archive_close_connection() close the worker end of the sockets so that this acts more like the Unix case. It's not perfect, because if a worker thread exits without going through exit_nicely() the closures won't happen; but that's not really supposed to happen.

  This has been wrong all along, so back-patch to 9.3 where parallel dump was introduced.

  Report: <2458.1450894615@sss.pgh.pa.us>
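On the SIGPIPE point: with the default disposition, writing to a pipe with no reader kills the process outright; ignoring the signal turns that into a reportable EPIPE error. A small sketch:

    #include <errno.h>
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int
    main(void)
    {
        int fds[2];

        if (pipe(fds) != 0)
            return 1;

        /* Without this, the write below would kill us with SIGPIPE. */
        signal(SIGPIPE, SIG_IGN);

        close(fds[0]);                  /* simulate an already-dead worker */
        if (write(fds[1], "cmd", 3) < 0)
            fprintf(stderr, "send failed: %s\n", strerror(errno)); /* EPIPE */
        return 0;
    }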
* Do not DROP default roles in pg_dumpall -c (Stephen Frost, 2016-05-24)
  When pulling the list of roles to drop, exclude roles whose names begin with "pg_" (as we do when we are dumping the roles out to recreate them).

  Also add regression tests to cover pg_dumpall -c and this specific issue.

  Noticed by Rushabh Lathia. Patch by me.
* Qualify table usage in dumpTable() and use regclass (Stephen Frost, 2016-05-24)
  All of the other tables used in the query in dumpTable(), which is collecting column-level ACLs, are qualified, so we should also qualify pg_init_privs, the related sub-select against pg_class, and the other queries added by the pg_dump catalog ACL work.

  Also, use ::regclass (or ::pg_catalog.regclass, where appropriate) instead of using a poorly constructed query to get the OID for various catalog tables.

  Issues identified by Noah and Alvaro, patch by me.
* Fix typo in TAP test identification string. (Tom Lane, 2016-05-23)
  Michael Paquier
* psql: Message style improvements (Peter Eisentraut, 2016-05-21)
* Pin the built-in index access methods. (Tom Lane, 2016-05-19)
  This was overlooked in commit 473b93287, which introduced DROP ACCESS METHOD. Although that command is restricted to superusers, we don't want even superusers dropping the built-in methods; "DROP ACCESS METHOD btree" in particular cannot be recovered from. Pin these objects in the same way that other initdb-created objects are pinned.

  I chose to bump catversion for this fix. That's not absolutely necessary perhaps, but it will ensure that no 9.6 production systems are missing the pin entries.
* Translation updates (Peter Eisentraut, 2016-05-09)
  Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
  Source-Git-Hash: 17bf3e8564abf600274789fcc90e72532d5e7c05
* Wording quibbles regarding initdb username (Stephen Frost, 2016-05-08)
  Use disallowed instead of reserved, cannot instead of can not, and double quotes instead of single quotes. Also add a test to cover the bug which started this discussion.

  Per discussion with Tom.
* Disallow superuser names starting with 'pg_' in initdb (Stephen Frost, 2016-05-08)
  As with CREATE ROLE, disallow users from specifying initial superuser names which begin with 'pg_' in initdb.

  Per discussion with Tom.
* In new pg_dump TAP tests, remove trailing "$" from regexps using /m. (Tom Lane, 2016-05-07)
  It emerges that some Perl versions before 5.8.9 have a bug with regexps that use the /m flag and contain "$". This is the reason why jacana is still failing on HEAD, and I was able to duplicate the failure on prairiedog's host. There's no real need for "$" in these patterns, since they are already matching through the statement-terminating semicolons (or matching an explicit \n in some cases). So just remove it.

  Note: the reason jacana hasn't actually reported any failures in the last little while is that the way the pg_dump TAP tests are set up, any failure of this sort results in echoing the entire pg_dump dump output to stderr. Since there were about a hundred such failures, that resulted in a 30MB log file which choked the buildfarm upload script. There is room for improvement here :-(.

  Per off-list discussion with Andrew and Stephen.
* Clean up after pg_dump test runs. (Tom Lane, 2016-05-06)
  The tmp_check directory needs to be removed by "make clean", and also ignored by .gitignore.
* Fix pg_upgrade to not fail when new-cluster TOAST rules differ from old. (Tom Lane, 2016-05-06)
  This patch essentially reverts commit 4c6780fd17aa43ed, in favor of a much simpler solution for the case where the new cluster would choose to create a TOAST table but the old cluster doesn't have one: just don't create a TOAST table.

  The existing code failed in at least two different ways if the situation arose: (1) ALTER TABLE RESET didn't grab an exclusive lock, so that the lock sanity check in create_toast_table failed; (2) pg_upgrade did not provide a pg_type OID for the new toast table, so that the crosscheck in TypeCreate failed. While both these problems were introduced by later patches, they show that the hack being used to cause TOAST table creation is overwhelmingly fragile (and untested). I also note that before the TypeCreate crosscheck was added, the code would have resulted in assigning an indeterminate pg_type OID to the toast table, possibly causing a later OID conflict in that catalog; so that it didn't really work even when committed.

  If we simply don't create a TOAST table, there will only be a problem if the code tries to store a tuple that's wider than a page, and field compression isn't sufficient to get it under a page. Given that the TOAST creation threshold is intended to be about a quarter of a page, it's very hard to believe that cross-version differences in the do-we-need-a-toast-table heuristic could result in an observable problem. So let's just follow the old version's conclusion about whether a TOAST table is needed. (If we ever do change needs_toast_table() so much that this conclusion doesn't apply, we can devise a solution at that time, and hopefully do it in a less klugy way than 4c6780fd17aa43ed did.)

  Back-patch to 9.3, like the previous patch.

  Discussion: <8110.1462291671@sss.pgh.pa.us>
* Disable BLOB test in pg_dump TAP tests (Stephen Frost, 2016-05-06)
  Buildfarm member jacana appears to have an issue with running this test. It's not entirely clear to me why, but rather than try to fight with it, just disable it for now.

  None of the other tests try to write out from psql directly as this test does, so it seems likely that the rest of the tests will be fine (as they have been on numerous other systems).
* Correct query in pg_dumpall:dumpRoles (Stephen Frost, 2016-05-06)
  We need to use a new branch due to the 9.5 addition of bypassrls when adding in the clause to exclude pg_* roles from being dumped by pg_dumpall.

  Pointed out by Noah, patch by me.
* Remove MODULES_big from test_pg_dump (Stephen Frost, 2016-05-06)
  The Makefile for test_pg_dump shouldn't have a MODULES_big line because there's no actual compiled bit for that extension. Hopefully this will fix the Windows buildfarm members which were complaining.

  In passing, also add the 'prove_installcheck' bit to the pg_dump and test_pg_dump Makefiles, to get the buildfarm members to actually run those tests.
* Improve pg_upgrade's report about failure to match up old and new tables. (Tom Lane, 2016-05-06)
  Ordinarily, pg_upgrade shouldn't have any difficulty in matching up all the relations it sees in the old and new databases. If it does, however, it just goes belly-up with a pretty unhelpful error message. That seemed fine as long as we expected the case never to occur in the wild, but Alvaro reported that it had been seen in a database whose pg_largeobject table had somehow acquired a TOAST table. That doesn't quite seem like a case that pg_upgrade actually needs to handle, but it would be good if the report were more diagnosable. Hence, extend the logic to print out as much information as we can about the mismatch(es) before we quit.

  In passing, improve the readability of get_rel_infos()'s data collection query, which had suffered seriously from lets-not-bother-to-update-comments syndrome, and generally was unnecessarily disrespectful to readers.

  It could be argued that this is a bug fix, but given that we have so few reports, I don't feel a need to back-patch; at least not before this has baked awhile in HEAD.
* Add TAP tests for pg_dump (Stephen Frost, 2016-05-06)
  This TAP test suite will create a new cluster, populate it based on the 'create_sql' values in the '%tests' hash, run all of the runs defined in the '%pgdump_runs' hash, and then for each test in the '%tests' hash, compare each run's output to the regular expression defined for the test under the 'like' and 'unlike' functions, as appropriate.

  While this test suite covers a fair bit of ground (67% of pg_dump.c and quite a bit of the other files in src/bin/pg_dump), there is still quite a bit which remains to be added to provide better code coverage. Still, this is quite a bit better than we had, and has found a few bugs already (note that the CREATE TRANSFORM test is commented out, as it is currently failing).

  Idea for using the TAP system from Tom, though all of the code is mine.
* Only issue LOCK TABLE commands when necessary (Stephen Frost, 2016-05-06)
  Reviewing the cases where we need to LOCK a given table during a dump, it was pointed out by Tom that we really don't need to LOCK a table if we are only looking to dump the ACL for it, or certain other components. After reviewing the queries run for all of the component pieces, a list of components was determined to not require LOCK'ing of the table.

  This implements a check to avoid LOCK'ing those tables.

  Initial complaint from Rushabh Lathia, discussed with Robert and Tom, the patch is mine.
* pg_dump performance and other fixes (Stephen Frost, 2016-05-06)
  Do not try to dump objects which do not have ACLs when only ACLs are being requested. This results in a significant performance improvement as we can avoid querying for further information on these objects when we don't need to.

  When limiting the components to dump for an extension, consider what components have been requested. Initially, we incorrectly hard-coded the components of the extension objects to dump, which would mean that we wouldn't dump some components even when they were asked for and in other cases we would dump components which weren't requested.

  Correct defaultACLs to use 'dump_contains' instead of 'dump'. The defaultACL is considered a member of the namespace and should be dumped based on the same set of components that the other objects in the schema are, not based on what we're dumping for the namespace itself (which might not include ACLs, if the namespace has just the default or initial ACL).

  Use DUMP_COMPONENT_ACL for from-initdb objects, to allow users to change their ACLs, should they wish to. This just extends what we are doing for the pg_catalog namespace to objects which are not members of namespaces.

  Due to column ACLs being treated a bit differently from other ACLs (they are actually reset to NULL when all privileges are revoked), adjust the query which gathers column-level ACLs to consider all of the ACL-relevant columns.
* Correct pg_dump WHERE clause for functions/aggregates (Stephen Frost, 2016-05-06)
  The query to grab the function/aggregate information is now joining to pg_init_privs, so we can simplify (and correct) the WHERE clause used to determine if a given function's ACL has changed from the initial ACL on the function.

  Bug found by Noah, patch by me.
* Fix pgbench's parsing of double values to notice trailing garbage. (Tom Lane, 2016-05-06)
  Noted by Fabien Coelho, though this isn't exactly his proposed patch. (The technique used here is borrowed from the zic sources.)
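The usual shape of such a check is to inspect strtod()'s end pointer; a sketch of the technique (not necessarily the committed code):

    #include <ctype.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Parse a double, rejecting empty input and trailing garbage. */
    static bool
    strtodouble(const char *str, double *result)
    {
        char *end;

        *result = strtod(str, &end);
        if (end == str)                 /* no number at all */
            return false;
        while (isspace((unsigned char) *end))
            end++;                      /* tolerate trailing whitespace */
        return *end == '\0';            /* anything else is garbage */
    }

    int
    main(void)
    {
        double d;

        printf("%d\n", strtodouble("3.14", &d));     /* 1: ok */
        printf("%d\n", strtodouble("3.14x", &d));    /* 0: trailing junk */
        printf("%d\n", strtodouble("", &d));         /* 0: empty */
        return 0;
    }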
* Improve handling of numeric-valued variables in pgbench. (Tom Lane, 2016-05-06)
  The previous coding always stored variable values as strings, doing conversion on-the-fly when a numeric value was needed or a number was to be assigned. This was a bit inefficient and risked loss of precision for floating-point values. The precision aspect had been hacked around by printing doubles in "%.18e" format, which is ugly and has machine-dependent results. Instead, arrange to preserve an assigned numeric value in the original binary numeric format, converting to string only when and if needed. When we do need to convert a double to string, convert in "%g" format with DBL_DIG precision, which is the standard way to do it and produces the least surprising results in most cases.

  The implementation supports storing both a string value and a numeric value for any one variable, with lazy conversion between them. I also arranged for lazy re-sorting of the variable array when new variables are added. That was mainly to allow a clean refactoring of putVariable() into two levels of subroutine, but it may allow us to save a few sorts.

  Discussion: <9188.1462475559@sss.pgh.pa.us>
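A minimal sketch of the dual-representation idea; the struct layout and names are invented for illustration rather than copied from pgbench:

    #include <float.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    typedef struct
    {
        char  *svalue;        /* string form, NULL until first needed */
        double nvalue;        /* binary numeric form */
        bool   has_numeric;   /* is nvalue valid? */
    } Variable;

    /* Convert lazily: build the string form only when it is asked for. */
    static const char *
    variable_as_string(Variable *v)
    {
        if (v->svalue == NULL && v->has_numeric)
        {
            char buf[64];

            /* %g with DBL_DIG digits: the standard, least-surprising way */
            snprintf(buf, sizeof(buf), "%.*g", DBL_DIG, v->nvalue);
            v->svalue = strdup(buf);
        }
        return v->svalue;
    }

    int
    main(void)
    {
        Variable v = {NULL, 1.0 / 3.0, true};

        printf(":x = %s\n", variable_as_string(&v));
        free(v.svalue);
        return 0;
    }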
* Fix psql's \ev and \sv commands so that they handle view reloptions. (Dean Rasheed, 2016-05-06)
  Commit 8eb6407aaeb6cbd972839e356b436bb698f51cff added support for editing and showing view definitions, but neglected to account for view options such as security_barrier and WITH CHECK OPTION which are not returned by pg_get_viewdef() and so need special handling.

  Author: Dean Rasheed
  Reviewed-by: Peter Eisentraut
  Discussion: http://www.postgresql.org/message-id/CAEZATCWZjCgKRyM-agE0p8ax15j9uyQoF=qew7D2xB6cF76T8A@mail.gmail.com
* Move and rename fmtReloptionsArray(). (Dean Rasheed, 2016-05-06)
  Move fmtReloptionsArray() from pg_dump.c to string_utils.c so that it is available to other frontend code. In particular psql's \ev and \sv commands need it to handle view reloptions. Also rename the function to appendReloptionsArray(), which is a more accurate description of what it does.

  Author: Dean Rasheed
  Reviewed-by: Peter Eisentraut
  Discussion: http://www.postgresql.org/message-id/CAEZATCWZjCgKRyM-agE0p8ax15j9uyQoF=qew7D2xB6cF76T8A@mail.gmail.com
* Rename pgbench min/max to least/greatest, and fix handling of double args. (Tom Lane, 2016-05-05)
  These functions behave like the backend's least/greatest functions, not like min/max, so the originally-chosen names invite confusion. Per discussion, rename to least/greatest.

  I also took it upon myself to make them return double if any input is double. The previous behavior of silently coercing all inputs to int surely does not meet the principle of least astonishment.

  Copy-edit some of the other new functions' documentation, too.
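A sketch of the promote-to-double rule for a two-argument greatest(); the value struct here is invented for illustration, not pgbench's actual PgBenchValue:

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct
    {
        bool is_double;
        union { long ival; double dval; } u;
    } Value;

    /* Return double if either input is double; otherwise stay integer. */
    static Value
    val_greatest(Value a, Value b)
    {
        Value r;

        if (a.is_double || b.is_double)
        {
            double da = a.is_double ? a.u.dval : (double) a.u.ival;
            double db = b.is_double ? b.u.dval : (double) b.u.ival;

            r.is_double = true;
            r.u.dval = (da > db) ? da : db;
        }
        else
        {
            r.is_double = false;
            r.u.ival = (a.u.ival > b.u.ival) ? a.u.ival : b.u.ival;
        }
        return r;
    }

    int
    main(void)
    {
        Value i = {false, {.ival = 2}};
        Value d = {true, {.dval = 2.5}};
        Value g = val_greatest(i, d);

        printf("%g\n", g.is_double ? g.u.dval : (double) g.u.ival); /* 2.5 */
        return 0;
    }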
* Support building with Visual Studio 2015 (Andrew Dunstan, 2016-04-29)
  Adjust the way we detect the locale. As a result the minimum Windows version supported by VS2015 and later is Windows Vista. Add some tweaks to remove new compiler warnings. Remove documentation references to the now obsolete msysGit.

  Michael Paquier, somewhat edited by me, reviewed by Christian Ullrich.

  Backpatch to 9.5
* pg_dump: Message style improvements (Peter Eisentraut, 2016-04-26)
  forgotten in b6dacc173b6830c515d970698cead9a85663c553
* pg_dump: Message style improvements (Peter Eisentraut, 2016-04-25)
* Update GETTEXT_FILES after config and controldata refactoring (Peter Eisentraut, 2016-04-24)
* Add pg_dump support for the new PARALLEL option for aggregates. (Robert Haas, 2016-04-20)
  This was an oversight in commit 41ea0c23761ca108e2f08f6e3151e3cb1f9652a1.

  Fabrízio de Royes Mello, per a report from Tushar Ahuja
* Avoid code duplication in \crosstabview. (Tom Lane, 2016-04-17)
  In commit 6f0d6a507 I added a duplicate copy of psqlscanslash's identifier downcasing code, but actually it's not hard to split that out as a callable subroutine and avoid the duplication.
* psql: Add new gettext trigger (Peter Eisentraut, 2016-04-15)
* Rethink \crosstabview's argument parsing logic. (Tom Lane, 2016-04-14)
  \crosstabview interpreted its arguments in an unusual way, including doing case-insensitive matching of unquoted column names, which is surely not the right thing. Rip that out in favor of doing something equivalent to the dequoting/case-folding rules used by other psql commands. To keep it simple, change the syntax so that the optional sort column is specified as a separate argument, instead of the also-quite-unusual syntax that attached it to the colH argument with a colon.

  Also, rework the error messages to be closer to project style.
* Fix pg_dump so pg_upgrade'ing an extension with simple opfamilies works. (Tom Lane, 2016-04-13)
  As reported by Michael Feld, pg_upgrade'ing an installation having extensions with operator families that contain just a single operator class failed to reproduce the extension membership of those operator families. This caused no immediate ill effects, but would create problems when later trying to do a plain dump and restore, because the seemingly-not-part-of-the-extension operator families would appear separately in the pg_dump output, and then would conflict with the families created by loading the extension. This has been broken ever since extensions were introduced, and many of the standard contrib extensions are affected, so it's a bit astonishing nobody complained before.

  The cause of the problem is a perhaps-ill-considered decision to omit such operator families from pg_dump's output on the grounds that the CREATE OPERATOR CLASS commands could recreate them, and having explicit CREATE OPERATOR FAMILY commands would impede loading the dump script into pre-8.3 servers. Whatever the merits of that decision when 8.3 was being written, it looks like a poor tradeoff now. We can fix the pg_upgrade problem simply by removing that code, so that the operator families are dumped explicitly (and then will be properly made to be part of their extensions).

  Although this fixes the behavior of future pg_upgrade runs, it does nothing to clean up existing installations that may have improperly-linked operator families. Given the small number of complaints to date, maybe we don't need to worry about providing an automated solution for that; anyone who needs to clean it up can do so with manual "ALTER EXTENSION ADD OPERATOR FAMILY" commands, or even just ignore the duplicate-opfamily errors they get during a pg_restore. In any case we need this fix.

  Back-patch to all supported branches.

  Discussion: <20228.1460575691@sss.pgh.pa.us>