path: root/src
...
* Fix interaction of parallel query with prepared statements. (Robert Haas, 2016-12-06)

Previously, a prepared statement created via a Parse message could get a parallel plan, but one created with a PREPARE statement could not. This state of affairs was due to confusion on my (rhaas) part: I erroneously believed that a CREATE TABLE .. AS EXECUTE statement could only be performed with a prepared statement created by PREPARE, but in fact one created by a Prepare message works just as well. Therefore, it makes no sense to allow parallel query in one case but not the other.

To fix, allow parallel query with all prepared statements, but run the parallel plan serially (i.e. without workers) in the case of CREATE TABLE .. AS EXECUTE. Also, document this.

Amit Kapila and Tobias Bussman, plus an extra sentence of documentation by me.
* Bump catversion for restrictive RLS changes (Stephen Frost, 2016-12-06)

Mea culpa. Pointed out by Andres.
* Remove extraneous semicolon from uses of relptr_declare(). (Tom Lane, 2016-12-05)

If we're going to write a semicolon after calls of relptr_declare(), then we don't need one inside the macro, and removing it suppresses "empty declaration" warnings from pickier compilers (eg pademelon).

While at it, we might as well use relptr() inside relptr_declare(), because otherwise that macro would likely go unused altogether. Also improve the comment, which I for one found unclear, and provide a specific example of intended usage.
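To make the convention concrete, a minimal sketch of the intended call-site usage (the struct and typedef names are invented for illustration, and the header path is an assumption):

```c
#include "utils/relptr.h"

typedef struct Foo
{
    int         value;
} Foo;

/* The trailing semicolon now comes from the call site, not the macro,
 * so the call expands to a single clean typedef. */
relptr_declare(Foo, RelptrFoo);
```
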
* Ensure gatherstate->nextreader is properly initialized. (Robert Haas, 2016-12-05)

The previous code worked OK as long as a Gather node was never rescanned, or if it was rescanned, as long as it got at least as many workers on rescan as it had originally. But if the number of workers ever decreased on a rescan, then it could crash.

Andreas Seltenreich
* Add support for restrictive RLS policies (Stephen Frost, 2016-12-05)

We have had support for restrictive RLS policies since 9.5, but they were only available through extensions which use the appropriate hooks. This adds support into the grammar, catalog, psql and pg_dump for restrictive RLS policies, thus reducing the cases where an extension is necessary.

In passing, also move away from using "AND"d and "OR"d in comments. As pointed out by Alvaro, it's not really appropriate to attempt to make verbs out of "AND" and "OR", so reword those comments which attempted to.

Reviewed By: Jeevan Chalke, Dean Rasheed
Discussion: https://postgr.es/m/20160901063404.GY4028@tamriel.snowman.net
* dsa: Cope with the possibility that SIZE_MAX is not defined. (Robert Haas, 2016-12-05)

Per buildfarm member gaur and Tom Lane.
* libpq: Fix another bug in 721f7bd3cbccaf8c07cad2707826b83f84694832. (Robert Haas, 2016-12-05)

If we failed to connect to one or more hosts, and then afterwards we find one that fails to be read-write, the latter error message was clobbering any earlier ones. Repair.

Mithun Cy, slightly revised by me.
* Fix race introduced by 6d46f4783efe457f74816a75173eb23ed8930020. (Robert Haas, 2016-12-05)

It's possible for the metapage contents to change after we release the lock, so we must read them before releasing the lock.

Amit Kapila. Submitted in response to a trouble report from Andreas Seltenreich, though it is not certain this fixes the problem.
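A hedged sketch of the required ordering (a simplified stand-in, not the actual hash-index code): copy whatever is needed out of the metapage while the buffer lock is still held, and only then release it.

```c
HashMetaPage metap = HashPageGetMeta(BufferGetPage(metabuf));
uint32      maxbucket = metap->hashm_maxbucket; /* read under the lock */
uint32      highmask = metap->hashm_highmask;
uint32      lowmask = metap->hashm_lowmask;

/* Only now is it safe to let other backends modify the metapage. */
LockBuffer(metabuf, BUFFER_LOCK_UNLOCK);
```
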
* Reduce the default for max_worker_processes back to 8. (Robert Haas, 2016-12-05)

Commit b460f5d6693103076dc554aa7cbb96e1e53074f9 -- at my suggestion -- increased the default value of max_worker_processes from 8 to 16, on the theory that this would be harmless and convenient for users. Unfortunately, this caused some buildfarm machines with low connection limits to start failing, so apparently it's not harmless after all.
* Fix more DSA problems uncovered by the buildfarm. (Robert Haas, 2016-12-05)

On 32-bit systems, don't try to use 64-bit DSA pointers, because the computation of DSA_MAX_SEGMENT_SIZE overflows Size. Cast 1 to Size before shifting it, so that the compiler doesn't produce a result of the wrong width.

In passing, change one use of size_t to Size.
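The shift-width pitfall described here, shown in isolation (illustrative variables, not the actual DSA_MAX_SEGMENT_SIZE definition):

```c
int     bits = 34;

/* Wrong: the literal 1 is an int, so the shift happens at int width
 * and overflows before the result is widened. */
Size    bad = 1 << bits;

/* Right: cast 1 to Size first, then shift. */
Size    good = ((Size) 1) << bits;
```
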
* Try to fix some DSA-related compiler warnings. (Robert Haas, 2016-12-05)

Commit 13df76a537cca3b8884911d8fdf7c89a457a8dd3 was overconfident about how portable %016lx is. Some compilers complain because they need %016llx, while platforms where DSA pointers are only 32 bits get unhappy about using a 64-bit format for a 32-bit quantity.

Thomas Munro, per an off-list suggestion from me.
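For background, two standard-C ways to format a 64-bit value portably; this illustrates the problem space rather than the exact fix adopted in the tree:

```c
#include <inttypes.h>
#include <stdio.h>

int
main(void)
{
    uint64_t    p = UINT64_C(0xdeadbeef12345678);

    /* <inttypes.h> supplies the right length modifier per platform. */
    printf("%016" PRIx64 "\n", p);

    /* Or widen explicitly and use the widest standard modifier. */
    printf("%016llx\n", (unsigned long long) p);
    return 0;
}
```
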
* Replace PostmasterRandom() with a stronger source, second attempt. (Heikki Linnakangas, 2016-12-05)

This adds a new routine, pg_strong_random() for generating random bytes, for use in both frontend and backend. At the moment, it's only used in the backend, but the upcoming SCRAM authentication patches need strong random numbers in libpq as well. pg_strong_random() is based on, and replaces, the existing implementation in pgcrypto. It can acquire strong random numbers from a number of sources, depending on what's available:

- OpenSSL RAND_bytes(), if built with OpenSSL
- On Windows, the native cryptographic functions are used
- /dev/urandom

Unlike the current pgcrypto function, the source is chosen by configure. That makes it easier to test different implementations, and ensures that we don't accidentally fall back to a less secure implementation, if the primary source fails. All of those methods are quite reliable, it would be pretty surprising for them to fail, so we'd rather find out by failing hard.

If no strong random source is available, we fall back to using erand48(), seeded from current timestamp, like PostmasterRandom() was. That isn't cryptographically secure, but allows us to still work on platforms that don't have any of the above stronger sources. Because it's not very secure, the built-in implementation is only used if explicitly requested with --disable-strong-random.

This replaces the more complicated Fortuna algorithm we used to have in pgcrypto, which is unfortunate, but all modern platforms have /dev/urandom, so it doesn't seem worth the maintenance effort to keep that. pgcrypto functions that require strong random numbers will be disabled with --disable-strong-random.

Original patch by Magnus Hagander, tons of further work by Michael Paquier and me.

Discussion: https://www.postgresql.org/message-id/CAB7nPqRy3krN8quR9XujMVVHYtXJ0_60nqgVc6oUk8ygyVkZsA@mail.gmail.com
Discussion: https://www.postgresql.org/message-id/CAB7nPqRWkNYRRPJA7-cF+LfroYV10pvjdz6GNvxk-Eee9FypKA@mail.gmail.com
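Callers see a simple boolean interface; a hedged sketch of backend usage, assuming the signature is pg_strong_random(void *buf, size_t len) returning bool:

```c
char        salt[16];

/* Ask the configured strong source for 16 random bytes; per the
 * description above, failure should be treated as a hard error. */
if (!pg_strong_random(salt, sizeof(salt)))
    ereport(ERROR,
            (errmsg("could not generate random salt")));
```
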
* Fix incorrect output from gin_desc(). (Fujii Masao, 2016-12-05)

Previously gin_desc() displayed incorrect output "unknown action 0" for XLOG_GIN_INSERT and XLOG_GIN_VACUUM_DATA_LEAF_PAGE records with valid actions. The cause of this problem was that gin_desc() wrongly used XLogRecGetData() to extract data from those records. Since they were registered by XLogRegisterBufData(), gin_desc() should have used XLogRecGetBlockData() instead, like gin_redo().

Also there were other differences about how to treat XLOG_GIN_INSERT record between gin_desc() and gin_redo(). This commit fixes the gin_desc() routine so that it treats those records in the same way as gin_redo().

Back-patch to 9.5, where the WAL record format was revamped and XLogRegisterBufData() was added.

Reported-By: Andres Freund
Reviewed-By: Tom Lane
Discussion: <20160509194645.7lewnpw647zegx2m@alap3.anarazel.de>
* Don't mess up pstate->p_next_resno in transformOnConflictClause(). (Tom Lane, 2016-12-04)

transformOnConflictClause incremented p_next_resno while generating the phony targetlist for the EXCLUDED pseudo-rel. Then that field got incremented some more during transformTargetList, possibly leading to free_parsestate concluding that we'd overrun the allowed length of a tlist, as reported by Justin Pryzby.

We could fix this by resetting p_next_resno to 1 after using it for the EXCLUDED pseudo-rel tlist, but it seems easier and less coupled to other places if we just don't use that field at all in this loop. (Note that this doesn't change anything about the resnos that end up appearing in the main target list, because those are all replaced with target-column numbers by updateTargetListEntry.)

In passing, fix incorrect type OID assigned to the whole-row Var for "EXCLUDED.*" (somehow this escaped having any bad consequences so far, but it's certainly wrong); remove useless assignment to var->location; pstrdup the column names in case of a relcache flush; and improve nearby comments.

Back-patch to 9.5 where ON CONFLICT was introduced.

Report: https://postgr.es/m/20161204163237.GA8030@telsasoft.com
* Document recipe for testing compatibility with old Perl. (Noah Misch, 2016-12-04)

Craig Ringer, reviewed by Kyotaro HORIGUCHI and Michael Paquier.
* Make pgwin32_putenv() probe every known CRT, regardless of compiler. (Noah Misch, 2016-12-04)

This extends to MinGW builds the provision for MSVC-built libraries to see putenv() effects. Doing so repairs, for example, the handling of the krb_server_keyfile parameter when linked with MSVC-built MIT Kerberos. Like the previous commit, no back-patch.
* Make pgwin32_putenv() follow DLL loading and unloading. (Noah Misch, 2016-12-03)

Until now, the first putenv() call of a given postgres.exe process would cache the set of loaded CRTs. If a CRT unloaded after that call, the next putenv() would crash. That risk was largely theoretical, because the first putenv() precedes all PostgreSQL-initiated module loading. However, this might explain bad interactions with antivirus and other software that injects threads asynchronously.

If an additional CRT loaded after the first putenv(), pgwin32_putenv() would not discover it. That CRT would have all environment changes predating its load, but it would not receive later PostgreSQL-initiated changes. An additional CRT loading concurrently with the first putenv() might miss that change in addition to missing later changes. Fix all those problems.

This removes the cache mechanism from pgwin32_putenv(); the cost, less than 100 μs per backend startup, is negligible. No resulting misbehavior was known to be user-visible given the core distribution alone, but one can readily construct an affected extension module. No back-patch given the lack of complaints and the potential for behavior changes in non-PostgreSQL code running in the backend.

Christian Ullrich, reviewed by Michael Paquier.
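A rough sketch of the probing pattern this series of pgwin32_putenv() commits revolves around (module names and control flow here are illustrative; the real list and bookkeeping live in src/port/win32env.c): each loaded CRT keeps its own copy of the environment and exports its own _putenv, and every one of them must see the change.

```c
#include <windows.h>

typedef int (__cdecl * PUTENVPROC) (const char *);

/* Illustrative subset of CRT module names; the real list is longer. */
static const char *const crt_names[] = {"msvcrt", "msvcr120", "ucrtbase"};

static void
putenv_all_crts(const char *envstr)
{
    size_t      i;

    for (i = 0; i < sizeof(crt_names) / sizeof(crt_names[0]); i++)
    {
        HMODULE     hmod = GetModuleHandle(crt_names[i]);
        PUTENVPROC  putenvFunc;

        if (hmod == NULL)
            continue;           /* this CRT is not currently loaded */
        putenvFunc = (PUTENVPROC) GetProcAddress(hmod, "_putenv");
        if (putenvFunc != NULL)
            putenvFunc(envstr); /* update that CRT's copy of the env */
    }
}
```
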
* Make pgwin32_putenv() visit debug CRTs. (Noah Misch, 2016-12-03)

This has no effect in the most conventional case, where no relevant DLL uses a debug build. For an example where it does matter, given a debug build of MIT Kerberos, the krb_server_keyfile parameter usually had no effect. Since nobody wants a Heisenbug, back-patch to 9.2 (all supported versions).

Christian Ullrich, reviewed by Michael Paquier.
* Remove wrong CloseHandle() call. (Noah Misch, 2016-12-03)

In accordance with its own documentation, invoke CloseHandle() only when directed in the documentation for the function that furnished the handle. GetModuleHandle() does not so direct. We have been issuing this call only in the rare event that a CRT DLL contains no "_putenv" symbol, so lack of bug reports is uninformative. Back-patch to 9.2 (all supported versions).

Christian Ullrich, reviewed by Michael Paquier.
* Refine win32env.c cosmetics. (Noah Misch, 2016-12-03)

Replace use of plain 0 as a null pointer constant. In comments, update terminology and lessen redundancy. Back-patch to 9.2 (all supported versions) for the convenience of back-patching the next two commits.

Christian Ullrich and Noah Misch, reviewed (in earlier versions) by Michael Paquier.
* Fix broken wait-for-previous-process-to-exit loop in regression test. (Tom Lane, 2016-12-02)

Must do pg_stat_clear_snapshot() inside the test's loop, or our snapshot of pg_stat_activity will never change :-(. Thinko in b3427dade -- evidently my workstation never really iterated the loop in testing. Per buildfarm.
* Fix thinko in b3427dade14cc31eb48740bc9ea98b5954470b24. (Robert Haas, 2016-12-02)
* Delete deleteWhatDependsOn() in favor of more performDeletion() flag bits. (Tom Lane, 2016-12-02)

deleteWhatDependsOn() had grown an uncomfortably large number of assumptions about what it's used for. There are actually only two minor differences between what it does and what a regular performDeletion() call can do, so let's invent additional bits in performDeletion's existing flags argument that specify those behaviors, and get rid of deleteWhatDependsOn() as such. (We'd probably have done it this way from the start, except that performDeletion didn't originally have a flags argument, IIRC.)

Also, add a SKIP_EXTENSIONS flag bit that prevents ever recursing to an extension, and use that when dropping temporary objects at session end. This provides a more general solution to the problem addressed in a hacky way in commit 08dd23cec: if an extension script creates temp objects and forgets to remove them again, the whole extension would go away when its contained temp objects were deleted. The previous solution only covered temp relations, but this solves it for all object types.

These changes require minor additions in dependency.c to pass the flags to subroutines that previously didn't get them, but it's still a net savings of code, and it seems cleaner than before.

Having done this, revert the special-case code added in 08dd23cec that prevented addition of pg_depend records for temp table extension membership, because that caused its own oddities: dropping an extension that had created such a table didn't automatically remove the table, leading to a failure if the table had another dependency on the extension (such as use of an extension data type), or to a duplicate-name failure if you then tried to recreate the extension. But we keep the part that prevents the pg_temp_nnn schema from becoming an extension member; we never want that to happen.

Add a regression test case covering these behaviors.

Although this fixes some arguable bugs, we've heard few field complaints, and any such problems are easily worked around by explicitly dropping temp objects at the end of extension scripts (which seems like good practice anyway). So I won't risk a back-patch.

Discussion: https://postgr.es/m/e51f4311-f483-4dd0-1ccc-abec3c405110@BlueTreble.com
* Introduce dynamic shared memory areas. (Robert Haas, 2016-12-02)

Programmers discovered decades ago that it was useful to have a simple interface for allocating and freeing memory, which is why malloc() and free() were invented. Unfortunately, those handy tools don't work with dynamic shared memory segments because those are specific to PostgreSQL and are not necessarily mapped at the same address in every cooperating process. So invent our own allocator instead.

This makes it possible for processes cooperating as part of parallel query execution to allocate and free chunks of memory without having to reserve them prior to the start of execution. It could also be used for longer lived objects; for example, we could consider storing data for pg_stat_statements or the stats collector in shared memory using these interfaces, rather than writing them to files. Basically, anything that needs shared memory but can't predict in advance how much it's going to need might find this useful.

Thomas Munro and Robert Haas. The original code (of mine) on which Thomas based his work was actually designed to be a new backend-local memory allocator for PostgreSQL, but that hasn't gone anywhere - or not yet, anyway. Thomas took that work and performed major refactoring and extensive modifications to make it work with dynamic shared memory, including the addition of appropriate locking.

Discussion: CA+TgmobkeWptGwiNa+SGFWsTLzTzD-CeLz0KcE-y6LFgoUus4A@mail.gmail.com
Discussion: CAEepm=1z5WLuNoJ80PaCvz6EtG9dN0j-KuHcHtU6QEfcPP5-qA@mail.gmail.com
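A minimal usage sketch of the malloc-style interface this introduces, assuming the dsa_* entry points have roughly the shapes implied above (the tranche id and sizes are illustrative):

```c
#include "postgres.h"
#include "utils/dsa.h"

void
dsa_example(int my_tranche_id)      /* tranche id: assumed by this sketch */
{
    /* Create an area backed by dynamic shared memory segments. */
    dsa_area   *area = dsa_create(my_tranche_id);

    /* Allocation yields a relative pointer that any attached process
     * can use, rather than a raw address. */
    dsa_pointer chunk = dsa_allocate(area, 1024);

    /* Translate to a process-local address before touching the memory. */
    char       *mem = (char *) dsa_get_address(area, chunk);

    memset(mem, 0, 1024);
    dsa_free(area, chunk);          /* return the chunk to the area */
}
```
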
* Management of free memory pages. (Robert Haas, 2016-12-02)

This is intended as infrastructure for a full-fledged allocator for dynamic shared memory. The interface looks a bit like a real allocator, but only supports allocating and freeing memory in multiples of the 4kB page size. Further, to free memory, you must know the size of the span you wish to free, in pages. While these limitations make it unsuitable as an allocator in and of itself, it still serves as very useful scaffolding for a full-fledged allocator.

Robert Haas and Thomas Munro. This code is mostly the same as my 2014 submission, but Thomas fixed quite a few bugs and made some changes to the interface.

Discussion: CA+TgmobkeWptGwiNa+SGFWsTLzTzD-CeLz0KcE-y6LFgoUus4A@mail.gmail.com
Discussion: CAEepm=1z5WLuNoJ80PaCvz6EtG9dN0j-KuHcHtU6QEfcPP5-qA@mail.gmail.com
* Add a crude facility for dealing with relative pointers. (Robert Haas, 2016-12-02)

C doesn't have any sort of built-in understanding of a pointer relative to some arbitrary base address, but dynamic shared memory segments can be mapped at different addresses in different processes, so any sort of shared data structure stored within a dynamic shared memory segment can't use absolute pointers. We could use something like Size to represent a relative pointer, but then the compiler provides no type-checking. Use stupid macro tricks to get some type-checking.

Patch originally by me. Concept suggested by Andres Freund. Recently resubmitted as part of Thomas Munro's work on dynamic shared memory allocation.

Discussion: 20131205144434.GG12398@alap2.anarazel.de
Discussion: CAEepm=1z5WLuNoJ80PaCvz6EtG9dN0j-KuHcHtU6QEfcPP5-qA@mail.gmail.com
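The core trick, sketched under the assumption that it works roughly as described (simplified stand-ins; the actual macros add compile-time type checks on top of this idea):

```c
/* A relative pointer is just an offset from a base address, wrapped in
 * a per-type union so that "relative pointer to Foo" and "relative
 * pointer to Bar" are distinct types as far as the compiler is concerned. */
#define relptr(type)  union { type *relptr_type; Size relptr_off; }

/* Simplified, untyped accessors: translate offset <-> local address,
 * with offset 0 reserved to mean NULL. */
#define relptr_access(base, rp) \
    ((rp).relptr_off == 0 ? NULL : (void *) ((base) + (rp).relptr_off))
#define relptr_store(base, rp, val) \
    ((rp).relptr_off = ((char *) (val)) - (base))
```
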
* Fix outdated comments (Alvaro Herrera, 2016-12-02)

Commit 597a87ccc9a6f neglected to update some comments; fix.

Report and patch by Thomas Munro. Reviewed by Petr Jelínek.
* Add max_parallel_workers GUC. (Robert Haas, 2016-12-02)

Increase the default value of the existing max_worker_processes GUC from 8 to 16, and add a new max_parallel_workers GUC with a maximum of 8. This way, even if the maximum amount of parallel query is happening, there is still room for background workers that do other things, as originally envisioned when max_worker_processes was added.

Julien Rouhaud, reviewed by Amit Kapila and revised by me.
* Fix Windows build for 78c8c814390f (Alvaro Herrera, 2016-12-02)

Author: Petr Jelínek
* Permit dump/reload of not-too-large >1GB tuples (Alvaro Herrera, 2016-12-02)

Our documentation states that our maximum field size is 1 GB, and that our maximum row size is 1.6 TB. However, while this might be attainable in theory with enough contortions, it is not workable in practice; for starters, pg_dump fails to dump tables containing rows larger than 1 GB, even if individual columns are well below the limit; and even if one does manage to manufacture a dump file containing a row that large, the server refuses to load it anyway.

This commit enables dumping and reloading of such tuples, provided two conditions are met:

1. no single column is larger than 1 GB (in output size -- for bytea this includes the formatting overhead)
2. the whole row is not larger than 2 GB

There are three related changes to enable this:

a. StringInfo's API now has two additional functions that allow creating a string that grows beyond the typical 1GB limit (a "long" string). ABI compatibility is maintained. We still limit these strings to 2 GB, though, for reasons explained below.

b. COPY now uses long StringInfos, so that pg_dump doesn't choke trying to emit rows longer than 1GB.

c. heap_form_tuple now uses the MCXT_ALLOW_HUGE flag in its allocation for the input tuple, which means that large tuples are accepted on input. Note that at this point we do not apply any further limit to the input tuple size.

The main reason to limit to 2 GB is that the FE/BE protocol uses 32 bit length words to describe each row; and because the documentation is ambiguous on its signedness and libpq does consider it signed, we cannot use the highest-order bit. Additionally, the StringInfo API uses "int" (which is 4 bytes wide in most platforms) in many places, so we'd need to change that API too in order to improve, which has lots of fallout.

Backpatch to 9.5, which is the oldest that has MemoryContextAllocExtended, a necessary piece of infrastructure. We could apply to 9.4 with very minimal additional effort, but any further than that would require backpatching "huge" allocations too.

This is the largest set of changes we could find that can be back-patched without breaking compatibility with existing systems. Fixing a bigger set of problems (for example, dumping tuples bigger than 2GB, or dumping fields bigger than 1GB) would require changing the FE/BE protocol and/or changing the StringInfo API in an ABI-incompatible way, neither of which would be back-patchable.

Authors: Daniel Vérité, Álvaro Herrera
Reviewed by: Tomas Vondra
Discussion: https://postgr.es/m/20160229183023.GA286012@alvherre.pgsql
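A hedged sketch of the new entry points mentioned in (a), assuming they mirror the existing makeStringInfo()/initStringInfo() pair under "long" names:

```c
#include "postgres.h"
#include "lib/stringinfo.h"

void
long_stringinfo_example(void)
{
    /* Heap-allocated variant: may grow past the usual 1 GB StringInfo
     * cap, up to roughly 2 GB. */
    StringInfo  buf = makeLongStringInfo();

    /* Stack-allocated variant: */
    StringInfoData rowbuf;

    initLongStringInfo(&rowbuf);

    appendStringInfoString(buf, "one very wide row ...");
    appendStringInfoString(&rowbuf, "more row data ...");
}
```
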
* Refactor libpqwalreceiver (Peter Eisentraut, 2016-12-01)

The whole walreceiver API is now wrapped into a struct, like most of our other loadable module APIs. The libpq connection is no longer a global variable in libpqwalreceiver. Instead, it is encapsulated into a struct that is passed around the functions. This allows multiple walreceivers to run at the same time.

Add some rudimentary support for logical replication connections to libpqwalreceiver.

These changes are mostly cosmetic and are going to be useful for the future logical replication patches.

From: Petr Jelinek <petr@2ndquadrant.com>
* Use latch instead of select() in walreceiver (Peter Eisentraut, 2016-12-01)

Replace use of poll()/select() by WaitLatchOrSocket(), which is more portable and flexible. Also change walreceiver to use its procLatch instead of a custom latch.

From: Petr Jelinek <petr@2ndquadrant.com>
* Add aggregate_with_argtypes and use it consistently (Peter Eisentraut, 2016-12-01)

This works like function_with_argtypes, but aggregates allow slightly different arguments.

Reviewed-by: Alvaro Herrera <alvherre@2ndquadrant.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
* Move function_with_argtypes to a better location (Peter Eisentraut, 2016-12-01)

It was apparently added for use by GRANT/REVOKE, but move it closer to where other function signature related things are kept.

Reviewed-by: Alvaro Herrera <alvherre@2ndquadrant.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
* Use grammar symbol function_with_argtypes consistently (Peter Eisentraut, 2016-12-01)

Instead of sometimes referring to a function signature like func_name func_args, use the existing function_with_argtypes symbol, which combines the two.

Reviewed-by: Alvaro Herrera <alvherre@2ndquadrant.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
* libpq: Fix inadvertent change in PQhost() behavior. (Robert Haas, 2016-12-01)

Commit 274bb2b3857cc987cfa21d14775cae9b0dababa5 caused PQhost() to return the value of the hostaddr parameter rather than the relevant host when the latter parameter was specified. That's wrong. Commit 9a1d0af4ad2cbd419115b453d811c141b80d872b then amplified the damage by using PQhost() in more places, so that the SSL test suite started failing.

Report by Andreas Karlsson; patch by me.
* Use narrower representative tuples in the hash-agg hashtable. (Andres Freund, 2016-11-30)

So far the hashtable stored representative tuples in the form of its input slot, with all columns in the hashtable that are not needed (i.e. not grouped upon or functionally dependent) set to NULL.

That's good for saving memory, but it turns out that having tuples full of NULL isn't free. slot_deform_tuple is faster if there's no NULL bitmap even if no NULLs are encountered, and skipping over leading NULLs isn't free.

So compute a separate tuple descriptor that only contains the needed columns. As columns have already been moved in/out of the slot for the hashtable, that does not imply additional per-row overhead.

Author: Andres Freund
Reviewed-By: Heikki Linnakangas
Discussion: https://postgr.es/m/20161103110721.h5i5t5saxfk5eeik@alap3.anarazel.de
* Perform only one projection to compute agg arguments. (Andres Freund, 2016-11-30)

Previously we did an ExecProject() for each individual aggregate argument. That turned out to be a performance bottleneck in queries with multiple aggregates.

Doing all the argument computations in one ExecProject() is quite a bit cheaper because ExecProject's fastpath can do the work at once in a relatively tight loop, and because it can get all the required columns with a single slot_getsomeattrs call and save some other redundant setup costs.

Author: Andres Freund
Reviewed-By: Heikki Linnakangas
Discussion: https://postgr.es/m/20161103110721.h5i5t5saxfk5eeik@alap3.anarazel.de
* Improve hash index bucket split behavior. (Robert Haas, 2016-11-30)

Previously, the right to split a bucket was represented by a heavyweight lock on the page number of the primary bucket page. Unfortunately, this meant that every scan needed to take a heavyweight lock on that bucket also, which was bad for concurrency. Instead, use a cleanup lock on the primary bucket page to indicate the right to begin a split, so that scans only need to retain a pin on that page, which they would have to acquire anyway, and which is also much cheaper.

In addition to reducing the locking cost, this also avoids locking out scans and inserts for the entire lifetime of the split: while the new bucket is being populated with copies of the appropriate tuples from the old bucket, scans and inserts can happen in parallel. There are minor concurrency improvements for vacuum operations as well, though the situation there is still far from ideal.

This patch also removes the unworldly assumption that a split will never be interrupted. With the new code, a split is done in a series of small steps and the system can pick up where it left off if it is interrupted prior to completion. While this patch does not itself add write-ahead logging for hash indexes, it is clearly a necessary first step, since one of the things that could interrupt a split is the removal of electrical power from the machine performing it.

Amit Kapila. I wrote the original design on which this patch is based, and did a good bit of work on the comments and README through multiple rounds of review, but all of the code is Amit's. Also reviewed by Jesper Pedersen, Jeff Janes, and others.

Discussion: http://postgr.es/m/CAA4eK1LfzcZYxLoXS874Ad0+S-ZM60U9bwcyiUZx9mHZ-KCWhw@mail.gmail.com
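A hedged sketch of the claim-a-split step described above (simplified; the real logic lives in the hash access method): the would-be splitter asks for a cleanup lock, an exclusive lock granted only when no other backend holds a pin, and backs off instead of blocking scanners.

```c
/* bucket_buf is already pinned; try to upgrade to a cleanup lock. */
if (!ConditionalLockBufferForCleanup(bucket_buf))
    return;                     /* a scan holds a pin; retry later */

/* ... perform, or resume, the split in small restartable steps ... */

LockBuffer(bucket_buf, BUFFER_LOCK_UNLOCK);
```
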
* Make all Unicode perl scripts use strict, rearrange logic for clarity. (Heikki Linnakangas, 2016-11-30)

The loops were a bit difficult to understand, due to breaking out of them early. Also fix things that perlcritic complained about.

Daniel Gustafsson
* Rewrite the perl scripts to produce our Unicode conversion tables. (Heikki Linnakangas, 2016-11-30)

Generate EUC_CN mappings from gb-18030-2000.xml, because GB2312.TXT is no longer available. Get UHC from windows-949-2000.xml, it's more up-to-date. Plus tons more small changes.

With these changes, the perl scripts faithfully produce the *.map files we have in the repository, from the external source files.

In passing, fix the Makefile to also download CP932.TXT and CP950.TXT.

Based on patches by Kyotaro Horiguchi, reviewed by Daniel Gustafsson.

Discussion: https://postgr.es/m/08e7892a-d55c-eefe-76e6-7910bc8dd1f3@iki.fi
* Remove leading zeros, for consistency with other map files. (Heikki Linnakangas, 2016-11-30)

The common style is to pad to 4 digits. Running the current perl scripts to generate these map files would override this change, but the next commit will rewrite the perl scripts to produce this style.

I'm doing this as a separate commit, to make it more clear what non-cosmetic changes the next commit makes to the map files.

Discussion: https://postgr.es/m/08e7892a-d55c-eefe-76e6-7910bc8dd1f3@iki.fi
* Remove code points < 0x80 from character conversion tables. (Heikki Linnakangas, 2016-11-30)

PostgreSQL treats characters with < 0x80 leading byte as plain ASCII, and they are not even passed to the conversion routines. There is no point in having them in the conversion tables.

Everything in the tables was a direct ASCII-to-ASCII mapping, except for two entries:

* SHIFT_JIS_2004 code point 0x5C (backslash in ASCII) was mapped to the Unicode YEN SIGN character.
* Unicode 0x5C (backslash again) was mapped to "REVERSE SOLIDUS" in SHIFT_JIS_2004.

These mappings never had any effect, so there's no functional change from removing them.

Discussion: https://postgr.es/m/08e7892a-d55c-eefe-76e6-7910bc8dd1f3@iki.fi
* Fix bogus handling of JOIN_UNIQUE_OUTER/INNER cases for parallel joins. (Tom Lane, 2016-11-29)

consider_parallel_nestloop passed the wrong jointype down to its subroutines for JOIN_UNIQUE_INNER cases (it should pass JOIN_INNER), and it thought that it could pass paths other than innerrel->cheapest_total_path to create_unique_path, which create_unique_path is not on board with. These bugs would lead to assertion failures or other errors, suggesting that this code path hasn't been tested much.

hash_inner_and_outer's code for parallel join effectively treated both JOIN_UNIQUE_OUTER and JOIN_UNIQUE_INNER the same as JOIN_INNER (for different reasons :-(), leading to incorrect plans that treated a semijoin as if it were a plain join.

Michael Day submitted a test case demonstrating that hash_inner_and_outer failed for JOIN_UNIQUE_OUTER, and I found the other cases through code review.

Report: https://postgr.es/m/D0E8A029-D1AC-42E8-979A-5DE4A77E4413@rcmail.com
* Improve eqjoinsel_semi's behavior for small inner relations with no stats. (Tom Lane, 2016-11-29)

If we don't have any MCV statistics for the inner relation, and we don't trust its numdistinct estimate either, eqjoinsel_semi falls back to a very conservative estimate (that 50% of the outer rows have matches). This is particularly problematic if the inner relation is completely empty, since then even an explicit ANALYZE won't produce any pg_statistic entries, so there's no way to budge the planner off the bad estimate.

We'd produce a better estimate in such cases if we used the nd2/nd1 selectivity heuristic, so an easy fix is to treat the nd2 estimate as non-default if we derive it from clamping to the inner rel's rowcount estimate. This won't fix every related case (mainly because the rowcount estimate might be larger than DEFAULT_NUM_DISTINCT), but it seems like a sane extension of the existing logic, so let's apply the change in HEAD and see if anyone complains.

Per bug #14438 from Nikolay Nikitin.

Report: https://postgr.es/m/20161128182113.6527.58926@wrigleys.postgresql.org
Discussion: https://postgr.es/m/31089.1480384713@sss.pgh.pa.us
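For readers who don't have selfuncs.c in their head, a rough sketch of what the nd2/nd1 heuristic amounts to (simplified and ignoring null fractions; treat the exact form as an assumption of this note):

```c
/* nd1 = # of distinct values on the outer side of the semijoin,
 * nd2 = # of distinct values on the inner side. */
if (nd1 <= nd2)
    selec = 1.0;            /* every outer value can find a match */
else
    selec = nd2 / nd1;      /* only this fraction of outer values can */
```

With nd2 clamped to an empty inner rel's rowcount and now treated as non-default, this path yields a near-zero estimate instead of the flat 50%.
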
* Straighten out some whitespace (Peter Eisentraut, 2016-11-29)
* Add uuid to the set of types supported by contrib/btree_gist. (Tom Lane, 2016-11-29)

Paul Jungwirth, reviewed and hacked on by Teodor Sigaev, Ildus Kurbangaliev, Adam Brusselback, Chris Bandy, and myself.

Discussion: https://postgr.es/m/CA+renyUEE29=X01JXdz8_TQvo6n9=2XoEBBRnQ8rkLyr+kjPxQ@mail.gmail.com
Discussion: https://postgr.es/m/55F6EE82.8080209@sigaev.ru
* libpq: Add target_session_attrs parameter. (Robert Haas, 2016-11-29)

Commit 274bb2b3857cc987cfa21d14775cae9b0dababa5 made it possible to specify multiple IPs in a connection string, but that's not good enough for the case where you have a read-write master and a bunch of read-only standbys and want to connect to whichever server is the master at the current time. This commit allows that, by making it possible to specify target_session_attrs=read-write as a connection parameter.

There was extensive discussion of the best name for the connection parameter and its values as well as the best way to distinguish master and standbys. For now, adopt the same solution as JDBC: if the user wants a read-write connection, issue 'show transaction_read_only' and reject the connection if the result is 'on'. In the future, we could add additional values of this new target_session_attrs parameter that issue different queries; or we might have some way of distinguishing the server type without resorting to an SQL query; but right now, we have this, and that's (hopefully) a good start.

Victor Wagner and Mithun Cy. Design review by Álvaro Herrera, Catalin Iacob, Takayuki Tsunakawa, and Craig Ringer; code review by me. I changed Mithun's patch to skip all remaining IPs for a host if we reject a connection based on this new parameter, rewrote the documentation, and did some other cosmetic cleanup.

Discussion: http://postgr.es/m/CAD__OuhqPRGpcsfwPHz_PDqAGkoqS1UvnUnOnAB-LBWBW=wu4A@mail.gmail.com
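A short client-side sketch of the behavior described above (host names are invented): combined with the multi-host support from commit 274bb2b3, the client walks the list until some server passes the read-write check.

```c
#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    /* Each listed host is tried until one accepts a read-write session. */
    PGconn     *conn = PQconnectdb("host=node1,node2 dbname=app "
                                   "target_session_attrs=read-write");

    if (PQstatus(conn) != CONNECTION_OK)
        fprintf(stderr, "no read-write server found: %s",
                PQerrorMessage(conn));
    PQfinish(conn);
    return 0;
}
```
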
* Add --no-blobs option to pg_dump (Stephen Frost, 2016-11-29)

Add an option to exclude blobs when running pg_dump. By default, blobs are included but this option can be used to exclude them while keeping the rest of the dump.

Comment updates and regression tests from me.

Author: Guillaume Lelarge
Reviewed-by: Amul Sul
Discussion: https://postgr.es/m/VisenaEmail.48.49926ea6f91dceb6.15355a48249@tc7-visena
* Fix incorrect variable type in set_rel_consider_parallel(). (Tom Lane, 2016-11-29)

func_parallel() returns char not Oid. Harmless, but still wrong.

Amit Langote