path: root/src/backend/lib
Commit message (Author, Date)
* Remove mergeHyperLogLog. (Robert Haas, 2016-04-27)
  It's buggy. If somebody needs this later, they'll need to put back a
  non-buggy version of it.
  Discussion: CAM3SWZT-i6R9JU5YXa8MJUou2_r3LfGJZpQ9tYa1BYxfkj0=cQ@mail.gmail.com
  Discussion: CAM3SWZRUOLsYoTT83QgdUy9D8ehYWm_nvbrrfcOOzikiRfFY7g@mail.gmail.com
  Peter Geoghegan
* Add two HyperLogLog functions (Alvaro Herrera, 2016-01-19)
  New functions initHyperLogLogError() and freeHyperLogLog() simplify using
  this module from elsewhere.
  Author: Tomáš Vondra
  Review: Peter Geoghegan
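  For illustration, a minimal sketch of how a caller might use the module
  after this change (the caller-side function, values array, and error
  target are hypothetical, not part of the commit):

    #include "postgres.h"
    #include "access/hash.h"
    #include "lib/hyperloglog.h"

    /* Estimate the number of distinct int32 values in an array. */
    static double
    estimate_distinct(const int32 *values, int nvalues)
    {
        hyperLogLogState hll;
        double      result;

        initHyperLogLogError(&hll, 0.01);   /* new: ask for ~1% relative error */
        for (int i = 0; i < nvalues; i++)
        {
            uint32 hash = DatumGetUInt32(hash_any((const unsigned char *) &values[i],
                                                  sizeof(int32)));
            addHyperLogLog(&hll, hash);
        }
        result = estimateHyperLogLog(&hll);
        freeHyperLogLog(&hll);              /* new: release the internal state */
        return result;
    }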
* Update copyright for 2016 (Bruce Momjian, 2016-01-02)
  Backpatch certain files through 9.1
* Avoid use of float arithmetic in bipartite_match.c. (Tom Lane, 2015-08-23)
  Since the distances used in this algorithm are small integers (not more
  than the size of the U set, in fact), there is no good reason to use float
  arithmetic for them. Use short ints instead: they're smaller, faster, and
  require no special portability assumptions.

  Per testing by Greg Stark, which disclosed that the code got into an
  infinite loop on VAX for lack of IEEE-style float infinities. We don't
  really care all that much whether Postgres can run on a VAX anymore, but
  there seems sufficient reason to change this code anyway.

  In passing, make a few other small adjustments to make the code match
  usual Postgres coding style a bit better.
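  As a minimal illustration of the idea (the type and macro names below are
  hypothetical, not the actual bipartite_match.c identifiers):

    #include <limits.h>
    #include <stdbool.h>

    /* Distances in Hopcroft-Karp are bounded by |U| + 1, so a short with a
     * sentinel replaces float INFINITY; no IEEE semantics are required. */
    typedef short hk_distance;
    #define HK_INFINITY ((hk_distance) SHRT_MAX)

    static bool
    hk_distance_is_finite(hk_distance d)
    {
        return d != HK_INFINITY;    /* replaces isinf(d) */
    }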
* Rely on inline functions even if that causes warnings in older compilers. (Andres Freund, 2015-08-05)
  So far we have worked around the fact that some very old compilers do not
  support 'inline' functions by only using inline functions conditionally
  (or not at all). Since such compilers are very rare by now, we have
  decided to rely on inline functions from 9.6 onwards.

  To avoid breaking these old compilers inline is defined away when not
  supported. That'll cause "function x defined but not used" type of
  warnings, but since nobody develops on such compilers anymore that's ok.

  This change in policy will allow us to more easily employ inline
  functions. I chose to remove code previously conditional on PG_USE_INLINE
  as it seemed confusing to have code dependent on a define that's always
  defined.

  Blacklisting of compilers, like in c53f73879f, now has to be done
  differently. A platform template can define PG_FORCE_DISABLE_INLINE to
  force inline to be defined empty.

  Discussion: 20150701161447.GB30708@awork2.anarazel.de
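  A rough sketch of the mechanism described above (simplified; the real
  configure/c.h logic is more involved):

    /* If a platform template sets PG_FORCE_DISABLE_INLINE, define the
     * keyword away.  Unused static inline helpers then merely produce
     * "defined but not used" warnings on such platforms. */
    #ifdef PG_FORCE_DISABLE_INLINE
    #undef inline
    #define inline
    #endif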
* Use appendStringInfoString/Char et al where appropriate. (Heikki Linnakangas, 2015-07-02)
  Patch by David Rowley. Backpatch to 9.5, as some of the calls were new in
  9.5, and keeping the code in sync with master makes future backpatching
  easier.
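  For example, when no format conversion is needed the cheaper calls can be
  used directly (the buffer here is purely illustrative):

    StringInfoData buf;

    initStringInfo(&buf);
    appendStringInfoString(&buf, "SELECT 1");   /* rather than appendStringInfo(&buf, "SELECT 1") */
    appendStringInfoChar(&buf, ';');            /* rather than appendStringInfo(&buf, ";") */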
* pgindent run for 9.5 (Bruce Momjian, 2015-05-23)
* More portability fixing for bipartite_match.c. (Tom Lane, 2015-05-16)
  <float.h> is required for isinf() on some platforms. Per buildfarm.
* Avoid direct use of INFINITY. (Tom Lane, 2015-05-15)
  It's not very portable. Per buildfarm.
* Support GROUPING SETS, CUBE and ROLLUP. (Andres Freund, 2015-05-16)
  This SQL standard functionality allows data to be aggregated by different
  GROUP BY clauses at once. Each grouping set returns rows in which columns
  that are grouped by only in other sets are set to NULL. This could
  previously be achieved by doing each grouping as a separate query,
  conjoined by UNION ALLs. Besides being considerably more concise, grouping
  sets will in many cases be faster, requiring only one scan over the
  underlying data.

  The current implementation of grouping sets only supports using sorting
  for input. Individual sets that share a sort order are computed in one
  pass. If there are sets that don't share a sort order, additional sort &
  aggregation steps are performed. These additional passes are sourced by
  the previous sort step; thus avoiding repeated scans of the source data.
  The code is structured in a way that adding support for purely using hash
  aggregation or a mix of hashing and sorting is possible. Sorting was
  chosen to be supported first, as it is the most generic method of
  implementation.

  Instead of, as in earlier versions of the patch, representing the chain of
  sort and aggregation steps as full-blown planner and executor nodes, all
  but the first sort are performed inside the aggregation node itself. This
  avoids the need to do some unusual gymnastics to handle having to return
  aggregated and non-aggregated tuples from underlying nodes, as well as
  having to shut down underlying nodes early to limit memory usage. The
  optimizer still builds Sort/Agg nodes to describe each phase, but they're
  not part of the plan tree; instead they are additional data for the
  aggregation node. They're a convenient and preexisting way to describe
  aggregation and sorting. The first (and possibly only) sort step is still
  performed as a separate execution step. That retains similarity with
  existing GROUP BY plans, makes rescans fairly simple, avoids very deep
  plans (leading to slow EXPLAINs) and easily allows the sorting step to be
  skipped if the underlying data is already sorted by other means.

  A somewhat ugly side of this patch is having to deal with a grammar
  ambiguity between the new CUBE keyword and the cube extension/functions
  named cube (and rollup). To avoid breaking existing deployments of the
  cube extension it has not been renamed, neither has cube been made a
  reserved keyword. Instead precedence hacking is used to make GROUP BY
  cube(..) refer to the CUBE grouping sets feature, and not the function
  cube(). To actually group by a function cube(), unlikely as that might be,
  the function name has to be quoted.

  Needs a catversion bump because stored rules may change.

  Author: Andrew Gierth and Atri Sharma, with contributions from Andres Freund
  Reviewed-By: Andres Freund, Noah Misch, Tom Lane, Svenne Krap, Tomas Vondra,
  Erik Rijkers, Marti Raudsepp, Pavel Stehule
  Discussion: CAOeZVidmVRe2jU6aMk_5qkxnB7dfmPROzM7Ur8JPW5j8Y5X-Lw@mail.gmail.com
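  An illustrative example of the new syntax (the table and columns are made
  up; this is not from the patch itself):

    -- One query computes several groupings over a single scan:
    SELECT brand, size, sum(sales)
    FROM items_sold
    GROUP BY GROUPING SETS ((brand), (size), ());

    -- CUBE and ROLLUP are shorthands for common collections of grouping sets:
    SELECT brand, size, sum(sales)
    FROM items_sold
    GROUP BY CUBE (brand, size);

    -- To call a function named cube() in GROUP BY instead of the new
    -- keyword, quote the function name:  GROUP BY "cube"(col)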
* Fix a bug in pairing heap removal code. (Heikki Linnakangas, 2015-02-17)
  After removal, the next_sibling pointer of a node was sometimes
  incorrectly left to point to another node in the heap, which meant that a
  node was sometimes linked twice into the heap. Surprisingly that didn't
  cause any crashes in my testing, but it was clearly wrong and could easily
  segfault in other scenarios.

  Also always keep the prev_or_parent pointer as NULL on the root node. That
  was not a correctness issue AFAICS, but let's be tidy.

  Add a debugging function, to dump the contents of a pairing heap as a
  string. It's #ifdef'd out, as it's not used for anything in any normal
  code, but it was highly useful in debugging this. Let's keep it handy for
  further reference.
* Fix typos, update README. (Robert Haas, 2015-01-23)
  Peter Geoghegan
* Add an explicit cast to Size to hyperloglog.c (Robert Haas, 2015-01-23)
  MSVC generates a warning here; we hope this will make it happy. Report by
  Michael Paquier. Patch by David Rowley.
* Use abbreviated keys for faster sorting of text datums. (Robert Haas, 2015-01-19)
  This commit extends the SortSupport infrastructure to allow operator
  classes the option to provide abbreviated representations of Datums; in
  the case of text, we abbreviate by taking the first few characters of the
  strxfrm() blob. If the abbreviated comparison is insufficient to resolve
  the comparison, we fall back on the normal comparator. This can be much
  faster than the old way of doing sorting if the first few bytes of the
  string are usually sufficient to resolve the comparison.

  There is the potential for a performance regression if all of the strings
  to be sorted are identical for the first 8+ characters and differ only in
  later positions; therefore, the SortSupport machinery now provides an
  infrastructure to abort the use of abbreviation if it appears that
  abbreviation is producing comparatively few distinct keys. HyperLogLog, a
  streaming cardinality estimator, is included in this commit and used to
  make that determination for text.

  Peter Geoghegan, reviewed by me.
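  A much-simplified sketch of the abbreviation idea (this is not the actual
  text opclass converter; buffer growth, endianness handling, and error
  checking are omitted, and the function name is made up):

    #include "postgres.h"
    #include <string.h>

    /* Pack the first sizeof(Datum) bytes of the strxfrm() blob into a Datum.
     * Comparing these prefixes resolves most comparisons cheaply; ties fall
     * back to the full collation-aware comparator. */
    static Datum
    abbreviate_text_prefix(const char *str)
    {
        char    blob[128];
        Datum   result = 0;
        size_t  ncopy;

        if (strxfrm(blob, str, sizeof(blob)) >= sizeof(blob))
            return 0;                   /* the real code grows its buffer instead */

        ncopy = strlen(blob) + 1;
        if (ncopy > sizeof(Datum))
            ncopy = sizeof(Datum);
        memcpy(&result, blob, ncopy);

        /* On little-endian machines the real code also byte-swaps, so that a
         * plain unsigned Datum comparison agrees with memcmp() ordering. */
        return result;
    }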
* Update copyright for 2015 (Bruce Momjian, 2015-01-06)
  Backpatch certain files through 9.0
* Move rbtree.c from src/backend/utils/misc to src/backend/lib. (Heikki Linnakangas, 2014-12-22)
  We have other general-purpose data structures in src/backend/lib, so it
  seems like a better home for the red-black tree as well.
* Use a pairing heap for the priority queue in kNN-GiST searches. (Heikki Linnakangas, 2014-12-22)
  This performs slightly better, uses less memory, and needs slightly less
  code in GiST, than the Red-Black tree previously used.

  Reviewed by Peter Geoghegan
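  A minimal sketch of how a caller embeds nodes in this pairing heap (the
  struct, comparator, and priority rule below are hypothetical, not the
  GiST code):

    #include "postgres.h"
    #include "lib/pairingheap.h"

    typedef struct SearchItem
    {
        pairingheap_node ph_node;   /* link embedded in the item itself */
        double           distance;  /* smaller distance = higher priority */
    } SearchItem;

    static int
    searchitem_cmp(const pairingheap_node *a, const pairingheap_node *b, void *arg)
    {
        /* The casts work because ph_node is the first member. */
        const SearchItem *ia = (const SearchItem *) a;
        const SearchItem *ib = (const SearchItem *) b;

        /* Return >0 when a has higher priority, so the nearest item is the root. */
        if (ia->distance < ib->distance)
            return 1;
        if (ia->distance > ib->distance)
            return -1;
        return 0;
    }

    /* Usage sketch:
     *   pairingheap *heap = pairingheap_allocate(searchitem_cmp, NULL);
     *   pairingheap_add(heap, &item->ph_node);
     *   pairingheap_node *top = pairingheap_remove_first(heap);
     */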
* Misc comment typo fixes. (Heikki Linnakangas, 2014-12-16)
  Backpatch the applicable parts, just to make backpatching future patches
  easier.
* pgindent run for 9.4 (Bruce Momjian, 2014-05-06)
  This includes removing tabs after periods in C comments, which was applied
  to back branches, so this change should not affect backpatching.
* Fix typos in comments. (Fujii Masao, 2014-03-17)
  Thom Brown
* Update copyright for 2014 (Bruce Momjian, 2014-01-07)
  Update all files in head, and files COPYRIGHT and legal.sgml in all back
  branches.
* Use improved vsnprintf calling logic in more places. (Tom Lane, 2013-10-24)
  When we are using a C99-compliant vsnprintf implementation (which should
  be most places, these days) it is worth the trouble to make use of its
  report of how large the buffer needs to be to succeed.

  This patch adjusts stringinfo.c and some miscellaneous usages in pg_dump
  to do that, relying on the logic recently added in libpgcommon's
  psprintf.c. Since these places want to know the number of bytes written
  once we succeed, modify the API of pvsnprintf() to report that.

  There remains near-duplicate logic in pqexpbuffer.c, but since that code
  is in libpq, psprintf.c's approach of exit()-on-error isn't appropriate
  for use there. Also note that I didn't bother touching the multitude of
  places that call (v)snprintf without any attempt to provide a resizable
  buffer.

  Release-note-worthy incompatibility: the API of appendStringInfoVA()
  changed. If there's any third-party code that's calling that directly, it
  will need tweaking along the same lines as in this patch.

  David Rowley and Tom Lane
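  The calling pattern third-party code would need looks roughly like this
  (essentially what appendStringInfo() itself does; the wrapper name is
  hypothetical):

    #include "postgres.h"
    #include "lib/stringinfo.h"
    #include <stdarg.h>

    static void
    append_formatted(StringInfo str, const char *fmt, ...)
    {
        for (;;)
        {
            va_list args;
            int     needed;

            va_start(args, fmt);
            needed = appendStringInfoVA(str, fmt, args);
            va_end(args);

            if (needed == 0)
                break;                        /* success */

            enlargeStringInfo(str, needed);   /* grow the buffer and retry */
        }
    }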
* Reset the binary heap in MergeAppend rescans. (Tom Lane, 2013-08-30)
  Failing to do so can cause queries to return wrong data, error out or
  crash. This requires adding a new binaryheap_reset() method to
  binaryheap.c, but that probably should have been there anyway.

  Per bug #8410 from Terje Elde. Diagnosis and patch by Andres Freund.
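  Schematically, a rescan can now empty and refill the existing heap rather
  than keeping stale entries (heap and nstreams below are hypothetical):

    binaryheap_reset(heap);                    /* the new method: empty, keep allocation */
    for (int i = 0; i < nstreams; i++)
        binaryheap_add_unordered(heap, Int32GetDatum(i));
    binaryheap_build(heap);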
* Improve ilist.h's support for deletion of slist elements during iteration. (Tom Lane, 2013-07-24)
  Previously one had to use slist_delete(), implying an additional scan of
  the list, making this infrastructure considerably less efficient than
  traditional Lists when deletion of element(s) in a long list is needed.
  Modify the slist_foreach_modify() macro to support deleting the current
  element in O(1) time, by keeping a "prev" pointer in addition to "cur" and
  "next". Although this makes iteration with this macro a bit slower, no
  real harm is done, since in any scenario where you're not going to delete
  the current list element you might as well just use slist_foreach instead.
  Improve the comments about when to use each macro.

  Back-patch to 9.3 so that we'll have consistent semantics in all branches
  that provide ilist.h. Note this is an ABI break for callers of
  slist_foreach_modify().

  Andres Freund and Tom Lane
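  A minimal sketch of the improved idiom (MyNode, link, and the pruning
  condition are hypothetical):

    #include "postgres.h"
    #include "lib/ilist.h"

    typedef struct MyNode
    {
        slist_node  link;
        int         value;
    } MyNode;

    static void
    prune_negative(slist_head *head)
    {
        slist_mutable_iter iter;

        slist_foreach_modify(iter, head)
        {
            MyNode *node = slist_container(MyNode, link, iter.cur);

            if (node->value < 0)
            {
                slist_delete_current(&iter);   /* O(1), thanks to the prev pointer */
                pfree(node);
            }
        }
    }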
* pgindent run for release 9.3 (Bruce Momjian, 2013-05-29)
  This is the first run of the Perl-based pgindent script. Also update
  pgindent instructions.
* Update copyrights for 2013 (Bruce Momjian, 2013-01-01)
  Fully update git head, and update back branches in ./COPYRIGHT and
  legal.sgml files.
* Basic binary heap implementation. (Robert Haas, 2012-11-29)
  There are probably other places where this can be used, but for now, this
  just makes MergeAppend use it, so that this code will have test coverage.
  There is other work in the queue that will use this, as well.

  Abhijit Menon-Sen, reviewed by Andres Freund, Robert Haas, Álvaro Herrera,
  Tom Lane, and others.
* Code review for inline-list patch. (Tom Lane, 2012-10-18)
  Make foreach macros less syntactically dangerous, and fix some typos in
  evidently-never-tested ones. Add missing slist_next_node and
  slist_head_node functions. Fix broken dlist_check code. Assorted comment
  improvements.
* Embedded list interface (Alvaro Herrera, 2012-10-17)
  Provide a common implementation of embedded singly-linked and
  doubly-linked lists. "Embedded" in the sense that the nodes' next/previous
  pointers exist within some larger struct; this design choice reduces
  memory allocation overhead.

  Most of the implementation uses inlineable functions (where supported),
  for performance.

  Some existing uses of both types of lists have been converted to the new
  code, for demonstration purposes. Other uses can (and probably will) be
  converted in the future. Since dllist.c is unused after this conversion,
  it has been removed.

  Author: Andres Freund
  Some tweaks by me
  Reviewed by Tom Lane, Peter Geoghegan
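  To illustrate the "embedded" design (the struct and field names below are
  hypothetical):

    #include "postgres.h"
    #include "lib/ilist.h"

    typedef struct Task
    {
        dlist_node  node;   /* the link lives inside the struct itself,
                             * so no separate list cell is allocated */
        int         id;
    } Task;

    static void
    log_tasks(dlist_head *tasks)
    {
        dlist_iter  iter;

        dlist_foreach(iter, tasks)
        {
            Task *task = dlist_container(Task, node, iter.cur);

            elog(DEBUG1, "task %d", task->id);
        }
    }

    /* Usage sketch:
     *   dlist_head tasks;
     *   dlist_init(&tasks);
     *   dlist_push_tail(&tasks, &some_task->node);
     */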
* Update copyright notices for year 2012. (Bruce Momjian, 2012-01-01)
* Stamp copyrights for year 2011. (Bruce Momjian, 2011-01-01)
* Remove cvs keywords from all files. (Magnus Hagander, 2010-09-20)
* pgindent run for 9.0, second run (Bruce Momjian, 2010-07-06)
* Work around a subtle portability problem in use of printf %s format. (Tom Lane, 2010-05-08)
  Depending on which spec you read, field widths and precisions in %s may be
  counted either in bytes or characters. Our code was assuming bytes, which
  is wrong at least for glibc's implementation, and in any case libc might
  have a different idea of the prevailing encoding than we do. Hence, for
  portable results we must avoid using anything more complex than just "%s"
  unless the string to be printed is known to be all-ASCII.

  This patch fixes the cases I could find, including the psql formatting
  failure reported by Hernan Gonzalez. In HEAD only, I also added comments
  to some places where it appears safe to continue using "%.*s".
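  A small standalone illustration of the hazard (not taken from the patch):

    #include <stdio.h>

    int
    main(void)
    {
        /* With a multibyte UTF-8 argument, the field width below may be
         * counted in bytes or in characters depending on the libc and
         * locale, so the padding differs across platforms.  A plain "%s"
         * is always safe. */
        printf("|%-10s|\n", "déjà vu");   /* 9 bytes, 7 characters */
        return 0;
    }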
* Update copyright for the year 2010. (Bruce Momjian, 2010-01-02)
* Assorted minor refactoring in EXPLAIN. (Tom Lane, 2009-07-24)
  This is believed to not change the output at all, with one known
  exception: "Subquery Scan foo" becomes "Subquery Scan on foo". (We can fix
  that if anyone complains, but it would be a wart, because the old code was
  clearly inconsistent.) The main intention is to remove duplicate coding
  and provide a cleaner base for subsequent EXPLAIN patching.

  Robert Haas
* Update copyright for 2009. (Bruce Momjian, 2009-01-01)
* Refactor backend makefiles to remove lots of duplicate code (Peter Eisentraut, 2008-02-19)
* Update copyrights in source tree to 2008. (Bruce Momjian, 2008-01-01)
* pgindent run for 8.3. (Bruce Momjian, 2007-11-15)
* Increase the initial size of StringInfo buffers to 1024 bytes (from 256); (Tom Lane, 2007-08-12)
  likewise increase the initial size of the scanner's literal buffer to 1024
  (from 128). Instrumentation of the regression tests suggests that this
  saves a useful amount of repalloc() traffic --- the number of calls
  occurring during one set of tests drops from about 6900 to about 3900. The
  old sizes were chosen in the late 90's with an eye to machines much
  smaller than are common today.
* Tweak the code in a couple of places to try to deliver more user-friendly (Tom Lane, 2007-05-28)
  error messages when a single COPY line is too long for us to handle. Per
  example from Johann Spies.
* Remove the currently unused FRONTEND case in dllist.c. This allows the usage (Alvaro Herrera, 2007-03-22)
  of palloc instead of malloc, which means a list can be freed simply by
  deleting the memory context that contains it.
* Add resetStringInfo(), which clears the content of a StringInfo, and (Neil Conway, 2007-03-03)
  fixup various places in the tree that were clearing a StringInfo by hand.
  Making this function a part of the API simplifies client code slightly,
  and avoids needlessly peeking inside the StringInfo interface.
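  For example, reusing one buffer across iterations (the loop bound and the
  process() call are hypothetical):

    StringInfoData buf;

    initStringInfo(&buf);
    for (int i = 0; i < n; i++)
    {
        resetStringInfo(&buf);        /* clear contents, keep the allocation */
        appendStringInfo(&buf, "item %d", i);
        process(buf.data);
    }
    pfree(buf.data);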
* Remove remains of old depend target. (Peter Eisentraut, 2007-01-20)
* Update CVS HEAD for 2007 copyright. Back branches are typically not (Bruce Momjian, 2007-01-05)
  back-stamped for this.
* Update copyright for 2006. Update scripts. (Bruce Momjian, 2006-03-05)
* Standard pgindent run for 8.1. (Bruce Momjian, 2005-10-15)
* Replace the use of "0" with "NULL" where appropriate in dllist.c, for (Neil Conway, 2005-01-18)
  good style and to satisfy sparse. From Alvaro Herrera.
* Tag appropriate files for rc3 (PostgreSQL Daemon, 2004-12-31)
  Also performed an initial run through of upgrading our Copyright date to
  extend to 2005 ... first run here was very simple ... change everything
  where: grep 1996-2004 && the word 'Copyright' ... scanned through the
  generated list with 'less' first, and after, to make sure that I only
  picked up the right entries ...