path: root/src
Commit message  (Author, Age)
...
* Allow background workers to bypass datallowconn  (Magnus Hagander, 2018-04-05)
  This adds a "flags" argument to BackgroundWorkerInitializeConnection() and
  BackgroundWorkerInitializeConnectionByOid(). For now only one flag,
  BGWORKER_BYPASS_ALLOWCONN, is defined, which allows the worker to ignore
  datallowconn.
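  A minimal sketch of how a worker might use the new flag (the worker function
  name and the target database are illustrative assumptions, not taken from the
  commit):

    #include "postgres.h"
    #include "postmaster/bgworker.h"

    /* Hypothetical worker entry point. */
    void
    my_worker_main(Datum main_arg)
    {
        BackgroundWorkerUnblockSignals();

        /* Connect even though template0 has datallowconn = false. */
        BackgroundWorkerInitializeConnection("template0", NULL,
                                             BGWORKER_BYPASS_ALLOWCONN);

        /* ... do the worker's actual job here ... */
    }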
* Add websearch_to_tsquery  (Teodor Sigaev, 2018-04-05)
  An error-tolerant conversion function with web-search-like syntax for search
  queries; it lets users constrain the search engine through an interface close
  to what they are already used to.
  Bump catalog version.
  Authors: Victor Drobny, Dmitry Ivanov, with editing by me
  Reviewed by: Aleksander Alekseev, Tomas Vondra, Thomas Munro, Aleksandr Parfenov
  Discussion: https://www.postgresql.org/message-id/flat/fe931111ff7e9ad79196486ada79e268@postgrespro.ru
* Add missing include  (Alvaro Herrera, 2018-04-05)
  A newly added prototype broke cpluspluscheck. Minor buglet in commit
  8694cc96b52a.
* Fix handling of non-upgraded B-tree metapages  (Teodor Sigaev, 2018-04-05)
  Commit 857f9c36 bumps the B-tree metapage version, and the upgrade is
  performed "on the fly" when needed. However, some asserts fired when an
  old-version metapage had been cached in rel->rd_amcache. Even though the new
  metadata fields are never used from rel->rd_amcache, that still needed to be
  fixed. This patch upgrades the metadata while caching it, filling the
  unavailable fields with their default values. contrib/pageinspect is also
  patched to handle non-upgraded metapages in the same way.
  Author: Alexander Korotkov
* MERGE minor errata  (Simon Riggs, 2018-04-05)
* MERGE fix variable warning in non-assert builds  (Simon Riggs, 2018-04-05)
  Author: Jesper Pedersen
* Remove unused vars and mark assert-only vars  (Teodor Sigaev, 2018-04-05)
  Kyotaro HORIGUCHI
* Fix typo  (Teodor Sigaev, 2018-04-05)
  Masahiko Sawada
* MERGE post-commit review  (Simon Riggs, 2018-04-05)
  Review comments from Andres Freund:
  * Consolidate code into AfterTriggerGetTransitionTable()
  * Rename nodeMerge.c to execMerge.c
  * Rename nodeMerge.h to execMerge.h
  * Move MERGE handling in ExecInitModifyTable() into an execMerge.c function,
    ExecInitMerge()
  * Move mt_merge_subcommands flags into execMerge.h
  * Rename opt_and_condition to opt_merge_when_and_condition
  * Wordsmith various comments
  Author: Pavan Deolasee
  Reviewer: Simon Riggs
* Install errcodes.txt for use by extensions.  (Andrew Gierth, 2018-04-05)
  Maintainers of out-of-tree PLs typically need access to the set of error
  codes. To avoid the need to duplicate that information in some form in PL
  source trees, provide errcodes.txt as part of a server installation.
  Thomas Munro, based on a suggestion from Andrew Gierth
  Discussion: https://postgr.es/m/87woykk7mu.fsf%40news-spur.riddles.org.uk
* Restore erroneously removed ONLY from PK check  (Alvaro Herrera, 2018-04-04)
  This is a blind fix, since I don't have SE-Linux to verify it.
  Per unwanted change in rhinoceros, running sepgsql tests. Noted by Tom Lane.
  Discussion: https://postgr.es/m/32347.1522865050@sss.pgh.pa.us
* Rewrite pg_dump TAP tests  (Stephen Frost, 2018-04-04)
  This reworks how the tests to run are defined. Instead of having to define
  all runs for all tests, we define those tests which should pass (generally
  using one of the defined broad hashes), add in any which should be specific
  for this test, and exclude any specific runs that shouldn't pass for this
  test.

  This ends up removing some 4k+ lines (more than half the file) but, more
  importantly, greatly simplifies the way runs-to-be-tested are defined.

  As discussed in the updated comments, for example, take the test which does
  CREATE TABLE test_table. That CREATE TABLE should show up in all 'full' runs
  of pg_dump, except those cases where 'test_table' is excluded, of course, and
  that's exactly how the test gets defined now (modulo a few other related
  cases, like where we dump only that table, or we dump the schema it's in, or
  we exclude the schema it's in):

    like => {
        %full_runs,
        %dump_test_schema_runs,
        only_dump_test_table => 1,
        section_pre_data     => 1, },
    unlike => {
        exclude_dump_test_schema => 1,
        exclude_test_table       => 1, }, },

  Next, we no longer expect every run to be listed for every test. If a run is
  listed in 'like' (directly or through a hash) then it's a 'like', unless it's
  listed in 'unlike' in which case it's an 'unlike'. If it isn't listed in
  either, then it's considered an 'unlike' automatically.

  Lastly, this changes the code to no longer use like/unlike but rather to use
  'ok()' with 'diag()' which allows much more control over what gets spit out
  to the screen. Gone are the days of the entire dump being sent to the
  console; now you'll just get a couple of lines for each failing test which
  say the test that failed and the run that it failed on.

  This covers both the pg_dump TAP tests in src/bin/pg_dump and those in
  src/test/modules/test_pg_dump.
* Improve FSM management for BRIN indexes.  (Tom Lane, 2018-04-04)
  BRIN indexes like to propagate additions of free space into the upper pages
  of their free space maps as soon as the new space is known, even when it's
  just on one individual index page. Previously this required calling
  FreeSpaceMapVacuum, which is quite an expensive thing if the map is large.
  Use the FreeSpaceMapVacuumRange function recently added by commit c79f6df75
  to reduce the amount of work done for this purpose.

  Fix a couple of places that neglected to do the upper-page vacuuming at all
  after recording new free space. If the policy is to be that BRIN should do
  that, it should do it everywhere.

  Do RecordPageWithFreeSpace unconditionally in brin_page_cleanup, and do
  FreeSpaceMapVacuum unconditionally in brin_vacuum_scan. Because of the FSM's
  imprecise storage of free space, the old complications here seldom bought
  anything, they just slowed things down. This approach also provides a
  predictable path for FSM corruption to be repaired.

  Remove premature RecordPageWithFreeSpace call in brin_getinsertbuffer where
  it's about to return an extended page to the caller. The caller should do
  that, instead, after it's inserted its new tuple. Fix the one caller that
  forgot to do so.

  Simplify logic in brin_doupdate's same-page-update case by postponing
  brin_initialize_empty_new_buffer to after the critical section; I see little
  point in doing it before.

  Avoid repeat calls of RelationGetNumberOfBlocks in brin_vacuum_scan. Avoid
  duplicate BufferGetBlockNumber and BufferGetPage calls in a couple of places
  where we already had the right values.

  Move a BRIN_elog debug logging call out of a critical section; that's pretty
  unsafe and I don't think it buys us anything to not wait till after the
  critical section.

  Move the "*extended = false" step in brin_getinsertbuffer into the routine's
  main loop. There's no actual bug there, since the loop can't iterate with
  *extended still true, but it doesn't seem very future-proof as coded; and
  it's certainly not documented as a loop invariant.

  This is all from follow-on investigation inspired by commit c79f6df75.

  Discussion: https://postgr.es/m/5801.1522429460@sss.pgh.pa.us
* Foreign keys on partitioned tables  (Alvaro Herrera, 2018-04-04)
  Author: Álvaro Herrera
  Discussion: https://postgr.es/m/20171231194359.cvojcour423ulha4@alvherre.pgsql
  Reviewed-by: Peter Eisentraut
* Skip full index scan during cleanup of B-tree indexes when possible  (Teodor Sigaev, 2018-04-04)
  Vacuum of an index consists of two stages: multiple (zero or more)
  ambulkdelete calls and one amvacuumcleanup call. When the workload on a
  particular table is append-only, autovacuum isn't intended to touch that
  table. However, a user may run vacuum manually in order to fill the
  visibility map and get the benefits of index-only scans. Then ambulkdelete
  wouldn't be called for the indexes of such a table (because no heap tuples
  were deleted); only amvacuumcleanup would be called. In this case,
  amvacuumcleanup would perform a full index scan for two objectives: put
  recyclable pages into the free space map and update index statistics.

  This patch allows btvacuumcleanup to skip the full index scan when two
  conditions are satisfied: no pages are going to be put into the free space
  map, and the index statistics aren't stale.

  In order to check the first condition, we store the oldest btpo_xact in the
  meta-page. When it precedes RecentGlobalXmin, there are some recyclable
  pages. In order to check the second condition, we store the number of heap
  tuples observed during the previous full index scan by cleanup. If the
  fraction of newly inserted tuples is less than
  vacuum_cleanup_index_scale_factor, the statistics aren't considered stale.
  vacuum_cleanup_index_scale_factor can be defined as both a reloption and a
  GUC (default).

  This patch bumps the B-tree meta-page version. The meta-page is upgraded
  "on the fly": during VACUUM it is rewritten with the new version. No special
  handling in pg_upgrade is required.

  Author: Masahiko Sawada, Alexander Korotkov
  Review by: Peter Geoghegan, Kyotaro Horiguchi, Alexander Korotkov, Yura Sokolov
  Discussion: https://www.postgresql.org/message-id/flat/CAD21AoAX+d2oD_nrd9O2YkpzHaFr=uQeGr9s1rKC3O4ENc568g@mail.gmail.com
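  A rough sketch of the two skip conditions just described; the struct field
  and variable names here are hypothetical stand-ins, not necessarily those
  used by the patch:

    /* Does btvacuumcleanup still need a full index scan? (sketch only) */
    static bool
    cleanup_scan_needed(BTMetaPageData *metad, double num_heap_tuples)
    {
        /* Recyclable pages exist once the oldest deleted page's btpo_xact
         * precedes RecentGlobalXmin. */
        if (TransactionIdIsValid(metad->btm_oldest_btpo_xact) &&
            TransactionIdPrecedes(metad->btm_oldest_btpo_xact,
                                  RecentGlobalXmin))
            return true;

        /* Statistics are stale once the heap has grown by more than
         * vacuum_cleanup_index_scale_factor since the last full scan. */
        if (num_heap_tuples >
            metad->btm_last_cleanup_num_heap_tuples *
            (1.0 + vacuum_cleanup_index_scale_factor))
            return true;

        return false;
    }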
* Fix the new ARMv8 CRC code for short and unaligned input.  (Heikki Linnakangas, 2018-04-04)
  The code before the main loop, which handles the possible 1-7 unaligned bytes
  at the beginning of the input, was broken and read past the input if the
  input was very short.
* Fix pg_basebackup checksum tests  (Magnus Hagander, 2018-04-04)
  Hopefully fix the instability of these checks by introducing the corruption
  in a table separate from pg_class, and also explicitly disable autovacuum on
  those tables. Also make sure PostgreSQL is stopped while the corruption is
  introduced to avoid possible caching effects.
  Author: Michael Banck
* Use ARMv8 CRC instructions where available.  (Heikki Linnakangas, 2018-04-04)
  ARMv8 introduced special CPU instructions for calculating CRC-32C. Use them,
  when available, for speed.

  Like with the similar Intel CRC instructions, several factors affect whether
  the instructions can be used. The compiler intrinsics for them must be
  supported by the compiler, and the instructions must be supported by the
  target architecture. If the compilation target architecture does not support
  the instructions, but adding "-march=armv8-a+crc" makes them available, then
  we compile the code with a runtime check to determine if the host we're
  running on supports them or not.

  For the runtime check, use the glibc getauxval() function. Unfortunately,
  that's not very portable, but I couldn't find any more portable way to do it.
  If getauxval() is not available, the CRC instructions will still be used if
  the target architecture supports them without any additional compiler flags,
  but the runtime check will not be available.

  Original patch by Yuqi Gu, heavily modified by me.
  Reviewed by Andres Freund, Thomas Munro.
  Discussion: https://www.postgresql.org/message-id/HE1PR0801MB1323D171938EABC04FFE7FA9E3110%40HE1PR0801MB1323.eurprd08.prod.outlook.com
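  A sketch of the kind of runtime check described above, assuming a Linux
  aarch64 target with glibc; the function name is made up and the real code in
  the tree may be organized differently:

    #include <sys/auxv.h>       /* getauxval(), AT_HWCAP */
    #include <asm/hwcap.h>      /* HWCAP_CRC32 on aarch64 Linux */

    /* Return true if the CPU we are running on has the CRC32 instructions. */
    static bool
    crc32c_armv8_available(void)
    {
        return (getauxval(AT_HWCAP) & HWCAP_CRC32) != 0;
    }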
* Also fix the descriptions in pg_config.h.win32.  (Heikki Linnakangas, 2018-04-04)
  I missed pg_config.h.win32 in the previous commit that fixed these in
  pg_config.h.in.
* Fix incorrect description of USE_SLICING_BY_8_CRC32C.  (Heikki Linnakangas, 2018-04-04)
  And a typo in the description of USE_SSE42_CRC32C_WITH_RUNTIME_CHECK, spotted
  by Daniel Gustafsson.
* Don't clone internal triggers to partitions  (Alvaro Herrera, 2018-04-03)
  Trigger cloning to partitions was supposed to occur for user-visible triggers
  only, but during development the protection that prevented it from occurring
  for internal triggers was lost. Reinstate it, and add a test case to ensure
  internal triggers (in the tested case, triggers implementing a deferred
  unique constraint) are not cloned. Without the code fix, the partitions in
  the test end up with different numbers of triggers, which is clearly wrong.
  Bug in 86f575948c77.
  Discussion: https://postgr.es/m/20180403214903.ozfagwjcpk337uw7@alvherre.pgsql
* Fix GCC 7 snprintf() compiler warning.  (Andres Freund, 2018-04-03)
  Make buffer 1 byte larger to fit a sign. It's actually impossible for there
  to be a sign in practice, but this is still required to keep GCC 7 happy.
  Cleanup from commit 51bc271790eb234a1ba4d14d3e6530f70de92ab5.
  Based on a suggestion from Peter Eisentraut.
  Author: Peter Geoghegan
  Reported-By: Peter Eisentraut
  Discussion: https://postgr.es/m/d1cc82ed-d07d-cef2-7c00-2e987f121648@2ndquadrant.com
* Pass correct TupDesc to ri_NullCheck() in Assert  (Alvaro Herrera, 2018-04-03)
  The previous coding passed the wrong table's tuple descriptor, which
  accidentally failed to fail because no existing test case exercises a foreign
  key in which the referenced attributes are further to the right of the
  referencing attributes. Add a test so that further breakage is visible.
  This got broken in 16828d5c0273.
  Discussion: https://postgr.es/m/20180403204723.fqte755nukgm42uf@alvherre.pgsql
* Prevent accidental linking of system-supplied copies of libpq.so etc.  (Tom Lane, 2018-04-03)
  We were being careless in some places about the order of -L switches in link
  command lines, such that -L switches referring to external directories could
  come before those referring to directories within the build tree. This made
  it possible to accidentally link a system-supplied library, for example
  /usr/lib/libpq.so, in place of the one built in the build tree. Hilarity
  ensued, the more so the older the system-supplied library is.

  To fix, break LDFLAGS into two parts, a sub-variable LDFLAGS_INTERNAL and the
  main LDFLAGS variable, both of which are "recursively expanded" so that they
  can be incrementally adjusted by different makefiles. Establish a policy that
  -L switches for directories in the build tree must always be added to
  LDFLAGS_INTERNAL, while -L switches for external directories must always be
  added to LDFLAGS. This is sufficient to ensure a safe search order. For
  simplicity, we typically also put -l switches for the respective libraries
  into those same variables. (Traditional make usage would have us put -l
  switches into LIBS, but cleaning that up is a project for another day, as
  there's no clear need for it.)

  This turns out to also require separating SHLIB_LINK into two variables,
  SHLIB_LINK and SHLIB_LINK_INTERNAL, with a similar rule about which switches
  go into which variable. And likewise for PG_LIBS.

  Although this change might appear to affect external users of pgxs.mk, I
  think it doesn't; they shouldn't have any need to touch the _INTERNAL
  variables.

  In passing, tweak src/common/Makefile so that the value of CPPFLAGS recorded
  in pg_config lacks "-DFRONTEND" and the recorded value of LDFLAGS lacks
  "-L../../../src/common". Both of those things are mistakes, apparently
  introduced during prior code rearrangements, as old versions of pg_config
  don't print them. In general we don't want anything that's specific to the
  src/common subdirectory to appear in those outputs.

  This is certainly a bug fix, but in view of the lack of field complaints, I'm
  unsure whether it's worth the risk of back-patching. In any case it seems
  wise to see what the buildfarm makes of it first.

  Discussion: https://postgr.es/m/25214.1522604295@sss.pgh.pa.us
* C comment: mention null handling in BuildTupleFromCStrings()  (Bruce Momjian, 2018-04-03)
  Discussion: https://postgr.es/m/CAFjFpRcF-wNbe0w-m3NpkEwr9shmOZ=GoESOzd2Wog9h55J8sA@mail.gmail.com
  Author: Ashutosh Bapat
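  The convention the new comment documents is that a NULL pointer in the values
  array yields a SQL NULL in the corresponding attribute. A small illustrative
  fragment (attinmeta is assumed to have been set up elsewhere with
  TupleDescGetAttInMetadata()):

    char       *values[2];
    HeapTuple   tuple;

    values[0] = "42";
    values[1] = NULL;       /* NULL pointer => SQL NULL, not the string "NULL" */

    tuple = BuildTupleFromCStrings(attinmeta, values);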
* Add prefix operator for TEXT type.  (Teodor Sigaev, 2018-04-03)
  The prefix operator, along with SP-GiST indexes, can be used as an
  alternative to LIKE 'word%' commands, and it doesn't have the string/prefix
  length limitation that B-Tree has.
  Bump catalog version.
  Author: Ildus Kurbangaliev, with some editing by me
  Review by: Arthur Zakirov, Alexander Korotkov, and me
  Discussion: https://www.postgresql.org/message-id/flat/20180202180327.222b04b3@wp.localdomain
* Attempt to fix jsonb_plperl build on Windows  (Peter Eisentraut, 2018-04-03)
* Properly use INT64_FORMAT in output  (Magnus Hagander, 2018-04-03)
  Per buildfarm animal prairiedog; solution suggested by Tom.
* Fix for checksum validation patch  (Magnus Hagander, 2018-04-03)
  Reorder the check for non-BLCKSZ size reads to make sure we don't abort
  sending the file in this case. Missed in the previous commit.
* Validate page level checksums in base backups  (Magnus Hagander, 2018-04-03)
  When base backups are run over the replication protocol (for example using
  pg_basebackup), verify the checksums of all data blocks if checksums are
  enabled. If checksum failures are encountered, log them as warnings but don't
  abort the backup.

  This becomes the default behaviour in pg_basebackup (provided checksums are
  enabled on the server), so add a switch (-k) to disable the checks if
  necessary.

  Author: Michael Banck
  Reviewed-By: Magnus Hagander, David Steele
  Discussion: https://postgr.es/m/20180228180856.GE13784@nighthawk.caipicrew.dd-dns.de
* Tab completion for MERGE  (Simon Riggs, 2018-04-03)
  Author: Pavan Deolasee
* WITH support in MERGE  (Simon Riggs, 2018-04-03)
  Author: Peter Geoghegan
  Recursive support removed, no tests. Docs added by me.
* New files for MERGE  (Simon Riggs, 2018-04-03)
* MERGE SQL Command following SQL:2016  (Simon Riggs, 2018-04-03)
  MERGE performs actions that modify rows in the target table using a source
  table or query. MERGE provides a single SQL statement that can conditionally
  INSERT/UPDATE/DELETE rows, a task that would otherwise require multiple PL
  statements, e.g.

    MERGE INTO target AS t
    USING source AS s
    ON t.tid = s.sid
    WHEN MATCHED AND t.balance > s.delta THEN
      UPDATE SET balance = t.balance - s.delta
    WHEN MATCHED THEN
      DELETE
    WHEN NOT MATCHED AND s.delta > 0 THEN
      INSERT VALUES (s.sid, s.delta)
    WHEN NOT MATCHED THEN
      DO NOTHING;

  MERGE works with regular and partitioned tables, including column and row
  security enforcement, as well as support for row, statement and transition
  triggers.

  MERGE is optimized for OLTP and is parameterizable, though also useful for
  large scale ETL/ELT. MERGE is not intended to be used in preference to
  existing single SQL commands for INSERT, UPDATE or DELETE since there is some
  overhead. MERGE can be used statically from PL/pgSQL.

  MERGE does not yet support inheritance, write rules, RETURNING clauses,
  updatable views or foreign tables. MERGE follows the SQL Standard per the
  most recent SQL:2016.

  Includes full tests and documentation, including full isolation tests to
  demonstrate the concurrent behavior.

  This version written from scratch in 2017 by Simon Riggs, using docs and
  tests originally written in 2009. Later work from Pavan Deolasee has been
  both complex and deep, leaving the lead author credit now in his hands.
  Extensive discussion of concurrency from Peter Geoghegan, with thanks for
  the time and effort contributed.

  Various issues reported via sqlsmith by Andreas Seltenreich.

  Authors: Pavan Deolasee, Simon Riggs
  Reviewer: Peter Geoghegan, Amit Langote, Tomas Vondra, Simon Riggs
  Discussion: https://postgr.es/m/CANP8+jKitBSrB7oTgT9CY2i1ObfOt36z0XMraQc+Xrz8QB0nXA@mail.gmail.com
  https://postgr.es/m/CAH2-WzkJdBuxj9PO=2QaO9-3h3xGbQPZ34kJH=HukRekwM-GZg@mail.gmail.com
* Revert "MERGE SQL Command following SQL:2016"Simon Riggs2018-04-02
| | | | This reverts commit e6597dc3533946b98acba7871bd4ca1f7a3d4c1d.
* Revert "Modified files for MERGE"Simon Riggs2018-04-02
| | | | This reverts commit 354f13855e6381d288dfaa52bcd4f2cb0fd4a5eb.
* Modified files for MERGE  (Simon Riggs, 2018-04-02)
* MERGE SQL Command following SQL:2016  (Simon Riggs, 2018-04-02)
  MERGE performs actions that modify rows in the target table using a source
  table or query. MERGE provides a single SQL statement that can conditionally
  INSERT/UPDATE/DELETE rows, a task that would otherwise require multiple PL
  statements, e.g.

    MERGE INTO target AS t
    USING source AS s
    ON t.tid = s.sid
    WHEN MATCHED AND t.balance > s.delta THEN
      UPDATE SET balance = t.balance - s.delta
    WHEN MATCHED THEN
      DELETE
    WHEN NOT MATCHED AND s.delta > 0 THEN
      INSERT VALUES (s.sid, s.delta)
    WHEN NOT MATCHED THEN
      DO NOTHING;

  MERGE works with regular and partitioned tables, including column and row
  security enforcement, as well as support for row, statement and transition
  triggers.

  MERGE is optimized for OLTP and is parameterizable, though also useful for
  large scale ETL/ELT. MERGE is not intended to be used in preference to
  existing single SQL commands for INSERT, UPDATE or DELETE since there is some
  overhead. MERGE can be used statically from PL/pgSQL.

  MERGE does not yet support inheritance, write rules, RETURNING clauses,
  updatable views or foreign tables. MERGE follows the SQL Standard per the
  most recent SQL:2016.

  Includes full tests and documentation, including full isolation tests to
  demonstrate the concurrent behavior.

  This version written from scratch in 2017 by Simon Riggs, using docs and
  tests originally written in 2009. Later work from Pavan Deolasee has been
  both complex and deep, leaving the lead author credit now in his hands.
  Extensive discussion of concurrency from Peter Geoghegan, with thanks for
  the time and effort contributed.

  Various issues reported via sqlsmith by Andreas Seltenreich.

  Authors: Pavan Deolasee, Simon Riggs
  Reviewers: Peter Geoghegan, Amit Langote, Tomas Vondra, Simon Riggs
  Discussion: https://postgr.es/m/CANP8+jKitBSrB7oTgT9CY2i1ObfOt36z0XMraQc+Xrz8QB0nXA@mail.gmail.com
  https://postgr.es/m/CAH2-WzkJdBuxj9PO=2QaO9-3h3xGbQPZ34kJH=HukRekwM-GZg@mail.gmail.com
* Fix some dubious WAL-parsing code.  (Tom Lane, 2018-04-02)
  Coverity complained about possible buffer overrun in two places added by
  commit 1eb6d6527, and AFAICS it's reasonable to worry: even granting that the
  WAL originator properly truncated the commit GID to GIDSIZE, we should not
  really bet our lives on that having the same value as it does in the current
  build. Hence, use strlcpy() not strcpy(), and adjust the pointer advancement
  logic to be sure we skip over the whole source string even if strlcpy()
  truncated it.
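  An illustrative fragment of that pattern (variable names invented for the
  sketch): strlcpy() bounds the copy, and the pointer is advanced past the full
  on-disk string whether or not it was truncated.

    char    gid[GIDSIZE];

    /* Copy at most sizeof(gid) - 1 bytes and always NUL-terminate. */
    strlcpy(gid, bufptr, sizeof(gid));

    /* Skip the whole source string, even if strlcpy() truncated it. */
    bufptr += strlen(bufptr) + 1;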
* psql: Fix \ef, \sf tab completion  (Peter Eisentraut, 2018-04-02)
  \ef and \sf take any kind of routine, not just normal functions.
  Author: Pavel Stehule <pavel.stehule@gmail.com>
* Make be-secure-common.c more consistent for future SSL implementations  (Peter Eisentraut, 2018-04-02)
  Recent commit 8a3d9425 introduced be-secure-common.c, which is aimed at
  holding backend-side APIs that can be used by any SSL implementation. The
  purpose is similar to fe-secure-common.c for the frontend-side APIs. However,
  the move neglected to include check_ssl_key_file_permissions(), which caused
  a double dependency between be-secure.c and be-secure-openssl.c.

  Refactor the code in a more logical way. This also puts into light an API
  which is usable by future SSL implementations for checking permissions on
  SSL key files.

  Author: Michael Paquier <michael@paquier.xyz>
* postgres_fdw: Push down partition-wise aggregation.  (Robert Haas, 2018-04-02)
  Since commit 7012b132d07c2b4ea15b0b3cb1ea9f3278801d98, postgres_fdw has been
  able to push down the toplevel aggregation operation to the remote server.
  Commit e2f1eb0ee30d144628ab523432320f174a2c8966 made it possible to break
  down the toplevel aggregation into one aggregate per partition. This commit
  lets postgres_fdw push down aggregation in that case just as it does at the
  top level.

  In order to make this work, this commit adds an additional argument to the
  GetForeignUpperPaths FDW API. A matching argument is added to the signature
  for create_upper_paths_hook. Third-party code using either of these will
  need to be updated.

  Also adjust create_foreignscan_plan() so that it picks up the correct set of
  relids in this case.

  Jeevan Chalke, reviewed by Ashutosh Bapat and by me and with some adjustments
  by me. The larger patch series of which this patch is a part was also
  reviewed and tested by Antonin Houska, Rajkumar Raghuwanshi, David Rowley,
  Dilip Kumar, Konstantin Knizhnik, Pascal Legrand, and Rafia Sabih.

  Discussion: http://postgr.es/m/CAM2+6=V64_xhstVHie0Rz=KPEQnLJMZt_e314P0jaT_oJ9MR8A@mail.gmail.com
  Discussion: http://postgr.es/m/CAM2+6=XPWujjmj5zUaBTGDoB38CemwcPmjkRy0qOcsQj_V+2sQ@mail.gmail.com
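  For FDW authors, the visible change is roughly the following sketch of the
  callback type; treat the parameter name and its exact type as assumptions and
  consult fdwapi.h for the committed signature:

    /* Sketch: the trailing "extra" argument is the newly added one; it carries
     * stage-specific details (e.g. grouping information for UPPERREL_GROUP_AGG). */
    typedef void (*GetForeignUpperPaths_function) (PlannerInfo *root,
                                                   UpperRelationKind stage,
                                                   RelOptInfo *input_rel,
                                                   RelOptInfo *output_rel,
                                                   void *extra);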
* Fix a boatload of typos in C comments.  (Tom Lane, 2018-04-01)
  Justin Pryzby
  Discussion: https://postgr.es/m/20180331105640.GK28454@telsasoft.com
* Fix non-portable use of round().  (Andres Freund, 2018-03-31)
  round() is from C99. Use rint() instead. There are behavioral differences
  between round() and rint(), but they should not matter to the Bloom filter
  optimal_k() function. We already assume POSIX behavior for rint(), so there
  is no question of rint() not using "rounds towards nearest" as its rounding
  mode.
  Cleanup from commit 51bc271790eb234a1ba4d14d3e6530f70de92ab5.
  Per buildfarm member thrips.
  Author: Peter Geoghegan
  Discussion: https://postgr.es/m/CAH2-Wzn76eCGUonARy-wrVtMHsf+4cvbK_oJAWTLfORTU5ki0w@mail.gmail.com
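  As a sketch of why the substitution is harmless here: the optimal number of
  hash functions for a Bloom filter is k = (m/n) * ln 2, and rounding that to
  the nearest integer gives the same answer with rint() as with round() for the
  values that can arise. The function and clamp below are illustrative, not the
  committed code:

    #include <math.h>

    static int
    optimal_k(uint64 bitset_bits, int64 total_elems)
    {
        /* rint() rounds to nearest, which is all optimal_k needs. */
        int     k = (int) rint(log(2.0) * bitset_bits / total_elems);

        return Max(1, Min(k, 32));  /* keep k in a sane range */
    }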
* Add Bloom filter implementation.  (Andres Freund, 2018-03-31)
  A Bloom filter is a space-efficient, probabilistic data structure that can be
  used to test set membership. Callers will sometimes incur false positives,
  but never false negatives. The rate of false positives is a function of the
  total number of elements and the amount of memory available for the Bloom
  filter.

  Two classic applications of Bloom filters are cache filtering, and data
  synchronization testing. Any user of Bloom filters must accept the
  possibility of false positives as a cost worth paying for the benefit in
  space efficiency.

  This commit adds a test harness extension module, test_bloomfilter. It can be
  used to get a sense of how the Bloom filter implementation performs under
  varying conditions.

  This is infrastructure for the upcoming "heapallindexed" amcheck patch, which
  verifies the consistency of a heap relation against one of its indexes.

  Author: Peter Geoghegan
  Reviewed-By: Andrey Borodin, Michael Paquier, Thomas Munro, Andres Freund
  Discussion: https://postgr.es/m/CAH2-Wzm5VmG7cu1N-H=nnS57wZThoSDQU+F5dewx3o84M+jY=g@mail.gmail.com
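  A hedged usage sketch of the new facility; the function names follow the
  commit's description but may not match the committed header exactly:

    #include "lib/bloomfilter.h"

    /* Size the filter for ~10 million elements within work_mem kilobytes. */
    bloom_filter *filter = bloom_create(10000000, work_mem, seed);

    bloom_add_element(filter, (unsigned char *) key, key_len);

    if (bloom_lacks_element(filter, (unsigned char *) probe, probe_len))
    {
        /* Definitely absent: a Bloom filter never returns false negatives. */
    }
    else
    {
        /* Possibly present: false positives occur at a bounded, low rate. */
    }

    bloom_free(filter);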
* Small cleanups in fast default code.  (Andrew Dunstan, 2018-04-01)
  Problems identified by Andres Freund and Haribabu Kommi
* Fix assorted issues in parallel vacuumdb.  (Tom Lane, 2018-03-31)
  Avoid storing the result of PQsocket() in a pgsocket variable; it's declared
  as int, and the no-socket test is properly written as "x < 0" not
  "x == PGINVALID_SOCKET". This accidentally had no bad effect because we never
  got to init_slot() with a bad connection, but it's still wrong.

  Actually, it seems like we should avoid storing the result for a long period
  at all. The function's not so expensive that it's worth avoiding, and the
  existing coding technique here would fail if anyone tried to PQreset the
  connection during the life of the program. Hence, just re-call PQsocket every
  time we construct a select(2) mask.

  Speaking of select(), GetIdleSlot imagined that it could compute the select
  mask once and continue to use it over multiple calls to select_loop(), which
  is pretty bogus since that would stomp on the mask on return. This could only
  matter if the function's outer loop iterated more than once, which is
  unlikely (it'd take some connection receiving data, but not enough to
  complete its command). But if it did happen, we'd acquire "tunnel vision" and
  stop watching the other connections for query termination, with the effect of
  losing parallelism.

  Another way in which GetIdleSlot could lose parallelism is that once PQisBusy
  returns false, it would lock in on that connection and do PQgetResult until
  that returns NULL; in some cases that could result in blocking. (Perhaps this
  can never happen in vacuumdb due to the limited set of commands that it can
  issue, but I'm not quite sure of that, and even if true today it's not a
  future-proof assumption.) Refactor the code to do that properly, so that it
  risks blocking in PQgetResult only in cases where we need to wait anyway.

  Another loss-of-parallelism problem, which *is* easily demonstrable, is that
  any setup queries issued during prepare_vacuum_command() were always issued
  on the last-to-be-created connection, whether or not that was idle.
  Long-running operations on that connection thus prevented issuance of
  additional operations on the other ones, except in the limited cases where no
  preparatory query was needed. Instead, wait till we've identified a free
  connection and use that one.

  Also, avoid core dump due to undersized malloc request in the case that no
  tables are identified to be vacuumed.

  The bogus no-socket test was noted by CharSyam, the other problems identified
  in my own code review. Back-patch to 9.5 where parallel vacuumdb was
  introduced.

  Discussion: https://postgr.es/m/CAMrLSE6etb33-192DTEUGkV-TsvEcxtBDxGWG1tgNOMnQHwgDA@mail.gmail.com
* Fix portability and translatability issues in commit 64f85894a.  (Tom Lane, 2018-03-31)
  Compilation failed for lack of an #ifdef on builds without
  pg_strong_random(). Also fix relevant error messages to meet project style
  guidelines.
  Fabien Coelho, further adjusted by me
  Discussion: https://postgr.es/m/32390.1522464534@sss.pgh.pa.us
* Portability fix for commit 9a895462d.  (Tom Lane, 2018-03-30)
  So far as I can find, NI_MAXHOST isn't actually required anywhere by POSIX.
  Nonetheless, commit 9a895462d supposed that it could rely on having that
  symbol without any ceremony at all. We do have a hack for providing it if the
  platform doesn't, in getaddrinfo.h, so fix the problem by #including that
  file. Per buildfarm.
* Remove PARTIAL_LINKING build mode.  (Andres Freund, 2018-03-30)
  In 9956ddc19164b02dc1925fb389a1af77472eba5e, ten years ago, the current
  objfile.txt based linking model was introduced. It's time to retire the old
  SUBSYS.o based model.

  This is primarily pertinent because the bitcode files for LLVM based inlining
  are not produced when using PARTIAL_LINKING. It does not seem worth fixing
  PARTIAL_LINKING to support that.

  Author: Andres Freund
  Discussion: https://postgr.es/m/20180121204356.d5oeu34jetqhmdv2@alap3.anarazel.de