author     Peter Geoghegan <pg@bowt.ie>   2025-03-05 09:36:48 -0500
committer  Peter Geoghegan <pg@bowt.ie>   2025-03-05 09:36:48 -0500
commit     5ead85fbc81162ab1594f656b036a22e814f96b3 (patch)
tree       2ad8afc5c2709eb41eca0e1b8176e729adabdcc2 /doc/src
parent     635f580120b99f6df71d7c12749b22acde61c5ad (diff)
download   postgresql-5ead85fbc81162ab1594f656b036a22e814f96b3.tar.gz
           postgresql-5ead85fbc81162ab1594f656b036a22e814f96b3.zip
Show index search count in EXPLAIN ANALYZE.
Expose the count of index searches/index descents in EXPLAIN ANALYZE's
output for index scan nodes. This information is particularly useful
with scans that use ScalarArrayOp quals, where the number of index scans
isn't predictable in advance (at least not with optimizations like the
one added to nbtree by Postgres 17 commit 5bf748b8). It will also be
useful when EXPLAIN ANALYZE shows details of an nbtree index scan that
uses skip scan optimizations set to be introduced by an upcoming patch.
The instrumentation works by teaching index AMs to increment a new
nsearches counter whenever a new index search begins. The counter is
incremented at exactly the same point that index AMs must already
increment the index's pg_stat_*_indexes.idx_scan counter (we're counting
the same event, but at the scan level rather than the relation level).
The new counter is stored in the scan descriptor (IndexScanDescData),
which explain.c reaches by going through the scan node's PlanState.
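
As a rough, non-authoritative sketch of the AM-side bookkeeping described
above (only the nsearches field name comes from this message; the helper
name is made up for illustration, and this is not the committed code):

    /*
     * Illustrative sketch only.  An index AM bumps the new per-scan counter
     * at the same point it already bumps the relation-level
     * pg_stat_*_indexes.idx_scan counter.
     */
    #include "postgres.h"
    #include "access/genam.h"
    #include "access/relscan.h"
    #include "pgstat.h"

    static inline void
    count_index_search(IndexScanDesc scan)
    {
        /* relation-level statistics counter (pre-existing) */
        pgstat_count_index_scan(scan->indexRelation);

        /* scan-level instrumentation counter (new), kept in the scan descriptor */
        scan->nsearches++;      /* field name per the commit message; sketch only */
    }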
This approach doesn't match the approach used when tracking other index
scan specific costs (e.g., "Rows Removed by Filter:"). It is similar to
the approach used in other cases where we must track costs that are only
readily accessible inside an access method, and not from the executor
(e.g., "Heap Blocks:" output for a Bitmap Heap Scan). It is inherently
necessary to maintain a counter that can be incremented multiple times
during a single amgettuple call (or amgetbitmap call), and directly
exposing PlanState.instrument to index access methods seems unappealing.
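
For illustration only, the explain.c side could look roughly like the sketch
below.  The executor state fields (iss_ScanDesc, ioss_ScanDesc, biss_ScanDesc)
are existing fields; the function itself and the nsearches access are an
assumed sketch of the approach described above, not the committed code:

    /*
     * Illustrative sketch only: reach the per-scan search count through the
     * executor node's state, rather than through PlanState.instrument.
     */
    #include "postgres.h"
    #include "access/genam.h"
    #include "access/relscan.h"
    #include "executor/execnodes.h"

    static uint64
    show_index_search_count(PlanState *planstate)
    {
        IndexScanDesc scandesc = NULL;

        switch (nodeTag(planstate))
        {
            case T_IndexScanState:
                scandesc = ((IndexScanState *) planstate)->iss_ScanDesc;
                break;
            case T_IndexOnlyScanState:
                scandesc = ((IndexOnlyScanState *) planstate)->ioss_ScanDesc;
                break;
            case T_BitmapIndexScanState:
                scandesc = ((BitmapIndexScanState *) planstate)->biss_ScanDesc;
                break;
            default:
                break;
        }

        /* nsearches field name per the commit message; sketch only */
        return scandesc ? scandesc->nsearches : 0;
    }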
Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Tomas Vondra <tomas@vondra.me>
Reviewed-By: Robert Haas <robertmhaas@gmail.com>
Reviewed-By: Masahiro Ikeda <ikedamsh@oss.nttdata.com>
Reviewed-By: Matthias van de Meent <boekewurm+postgres@gmail.com>
Discussion: https://postgr.es/m/CAH2-Wz=PKR6rB7qbx+Vnd7eqeB5VTcrW=iJvAsTsKbdG+kW_UA@mail.gmail.com
Discussion: https://postgr.es/m/CAH2-WzkRqvaqR2CTNqTZP0z6FuL4-3ED6eQB0yx38XBNj1v-4Q@mail.gmail.com
Diffstat (limited to 'doc/src')
-rw-r--r--  doc/src/sgml/bloom.sgml        |  7
-rw-r--r--  doc/src/sgml/monitoring.sgml   | 28
-rw-r--r--  doc/src/sgml/perform.sgml      | 60
-rw-r--r--  doc/src/sgml/ref/explain.sgml  |  3
-rw-r--r--  doc/src/sgml/rules.sgml        |  2
5 files changed, 91 insertions, 9 deletions
diff --git a/doc/src/sgml/bloom.sgml b/doc/src/sgml/bloom.sgml
index 663a0a4a681..ec5d077679b 100644
--- a/doc/src/sgml/bloom.sgml
+++ b/doc/src/sgml/bloom.sgml
@@ -173,10 +173,11 @@ CREATE INDEX
    Buffers: shared hit=21864
    ->  Bitmap Index Scan on bloomidx  (cost=0.00..178436.00 rows=1 width=0) (actual time=20.005..20.005 rows=2300.00 loops=1)
          Index Cond: ((i2 = 898732) AND (i5 = 123451))
+         Index Searches: 1
          Buffers: shared hit=19608
  Planning Time: 0.099 ms
  Execution Time: 22.632 ms
-(10 rows)
+(11 rows)
 </programlisting>
 </para>
@@ -208,13 +209,15 @@ CREATE INDEX
    Buffers: shared hit=6
    ->  Bitmap Index Scan on btreeidx5  (cost=0.00..4.52 rows=11 width=0) (actual time=0.026..0.026 rows=7.00 loops=1)
          Index Cond: (i5 = 123451)
+         Index Searches: 1
          Buffers: shared hit=3
    ->  Bitmap Index Scan on btreeidx2  (cost=0.00..4.52 rows=11 width=0) (actual time=0.007..0.007 rows=8.00 loops=1)
          Index Cond: (i2 = 898732)
+         Index Searches: 1
          Buffers: shared hit=3
  Planning Time: 0.264 ms
  Execution Time: 0.047 ms
-(13 rows)
+(15 rows)
 </programlisting>
   Although this query runs much faster than with either of the single
   indexes, we pay a penalty in index size.  Each of the single-column
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 16646f560e8..fd9bdd88472 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -4234,16 +4234,32 @@ description | Waiting for a newly initialized WAL file to reach durable storage
   <note>
    <para>
-    Queries that use certain <acronym>SQL</acronym> constructs to search for
-    rows matching any value out of a list or array of multiple scalar values
-    (see <xref linkend="functions-comparisons"/>) perform multiple
-    <quote>primitive</quote> index scans (up to one primitive scan per scalar
-    value) during query execution.  Each internal primitive index scan
-    increments <structname>pg_stat_all_indexes</structname>.<structfield>idx_scan</structfield>,
+    Index scans may sometimes perform multiple index searches per execution.
+    Each index search increments <structname>pg_stat_all_indexes</structname>.<structfield>idx_scan</structfield>,
     so it's possible for the count of index scans to significantly exceed the
     total number of index scan executor node executions.
    </para>
+   <para>
+    This can happen with queries that use certain <acronym>SQL</acronym>
+    constructs to search for rows matching any value out of a list or array of
+    multiple scalar values (see <xref linkend="functions-comparisons"/>).  It
+    can also happen to queries with a
+    <literal><replaceable>column_name</replaceable> =
+    <replaceable>value1</replaceable> OR
+    <replaceable>column_name</replaceable> =
+    <replaceable>value2</replaceable> ...</literal> construct, though only
+    when the optimizer transforms the construct into an equivalent
+    multi-valued array representation.
+   </para>
   </note>
+  <tip>
+   <para>
+    <command>EXPLAIN ANALYZE</command> outputs the total number of index
+    searches performed by each index scan node.  See
+    <xref linkend="using-explain-analyze"/> for an example demonstrating how
+    this works.
+   </para>
+  </tip>
 </sect2>
diff --git a/doc/src/sgml/perform.sgml b/doc/src/sgml/perform.sgml
index be4b49f62b5..684eccc097b 100644
--- a/doc/src/sgml/perform.sgml
+++ b/doc/src/sgml/perform.sgml
@@ -729,9 +729,11 @@ WHERE t1.unique1 < 10 AND t1.unique2 = t2.unique2;
          Buffers: shared hit=3 read=5 written=4
          ->  Bitmap Index Scan on tenk1_unique1  (cost=0.00..4.36 rows=10 width=0) (actual time=0.004..0.004 rows=10.00 loops=1)
                Index Cond: (unique1 < 10)
+              Index Searches: 1
                Buffers: shared hit=2
    ->  Index Scan using tenk2_unique2 on tenk2 t2  (cost=0.29..7.90 rows=1 width=244) (actual time=0.003..0.003 rows=1 loops=10)
          Index Cond: (unique2 = t1.unique2)
+         Index Searches: 10
          Buffers: shared hit=24 read=6
  Planning:
    Buffers: shared hit=15 dirtied=9
@@ -790,6 +792,7 @@ WHERE t1.unique1 < 100 AND t1.unique2 = t2.unique2 ORDER BY t1.fivethous;
          Buffers: shared hit=92
          ->  Bitmap Index Scan on tenk1_unique1  (cost=0.00..5.04 rows=100 width=0) (actual time=0.013..0.013 rows=100.00 loops=1)
                Index Cond: (unique1 < 100)
+              Index Searches: 1
                Buffers: shared hit=2
  Planning:
    Buffers: shared hit=12
@@ -806,6 +809,58 @@ WHERE t1.unique1 < 100 AND t1.unique2 = t2.unique2 ORDER BY t1.fivethous;
  </para>
 
  <para>
+   Index Scan nodes (as well as Bitmap Index Scan and Index-Only Scan nodes)
+   show an <quote>Index Searches</quote> line that reports the total number
+   of searches across <emphasis>all</emphasis> node
+   executions/<literal>loops</literal>:
+
+<screen>
+EXPLAIN ANALYZE SELECT * FROM tenk1 WHERE thousand IN (1, 500, 700, 999);
+                                                         QUERY PLAN
+-------------------------------------------------------------------&zwsp;---------------------------------------------------------
+ Bitmap Heap Scan on tenk1  (cost=9.45..73.44 rows=40 width=244) (actual time=0.012..0.028 rows=40.00 loops=1)
+   Recheck Cond: (thousand = ANY ('{1,500,700,999}'::integer[]))
+   Heap Blocks: exact=39
+   Buffers: shared hit=47
+   ->  Bitmap Index Scan on tenk1_thous_tenthous  (cost=0.00..9.44 rows=40 width=0) (actual time=0.009..0.009 rows=40.00 loops=1)
+         Index Cond: (thousand = ANY ('{1,500,700,999}'::integer[]))
+         Index Searches: 4
+         Buffers: shared hit=8
+ Planning Time: 0.037 ms
+ Execution Time: 0.034 ms
+</screen>
+
+   Here we see a Bitmap Index Scan node that needed 4 separate index
+   searches.  The scan had to search the index from the
+   <structname>tenk1_thous_tenthous</structname> index root page once per
+   <type>integer</type> value from the predicate's <literal>IN</literal>
+   construct.  However, the number of index searches often won't have such a
+   simple correspondence to the query predicate:
+
+<screen>
+EXPLAIN ANALYZE SELECT * FROM tenk1 WHERE thousand IN (1, 2, 3, 4);
+                                                             QUERY PLAN
+----------------------------------------------------------------------------------------------------------------------------------
+ Bitmap Heap Scan on tenk1  (cost=9.45..73.44 rows=40 width=244) (actual time=0.009..0.019 rows=40.00 loops=1)
+   Recheck Cond: (thousand = ANY ('{1,2,3,4}'::integer[]))
+   Heap Blocks: exact=38
+   Buffers: shared hit=40
+   ->  Bitmap Index Scan on tenk1_thous_tenthous  (cost=0.00..9.44 rows=40 width=0) (actual time=0.005..0.005 rows=40.00 loops=1)
+         Index Cond: (thousand = ANY ('{1,2,3,4}'::integer[]))
+         Index Searches: 1
+         Buffers: shared hit=2
+ Planning Time: 0.029 ms
+ Execution Time: 0.026 ms
+</screen>
+
+   This variant of our <literal>IN</literal> query performed only 1 index
+   search.  It spent less time traversing the index (compared to the original
+   query) because its <literal>IN</literal> construct uses values matching
+   index tuples stored next to each other, on the same
+   <structname>tenk1_thous_tenthous</structname> index leaf page.
+  </para>
+
+  <para>
    Another type of extra information is the number of rows removed by a
    filter condition:
@@ -861,6 +916,7 @@ EXPLAIN ANALYZE SELECT * FROM polygon_tbl WHERE f1 @> polygon '(0.5,2.0)';
 Index Scan using gpolygonind on polygon_tbl  (cost=0.13..8.15 rows=1 width=85) (actual time=0.074..0.074 rows=0.00 loops=1)
   Index Cond: (f1 @> '((0.5,2))'::polygon)
   Rows Removed by Index Recheck: 1
+  Index Searches: 1
   Buffers: shared hit=1
 Planning Time: 0.039 ms
 Execution Time: 0.098 ms
@@ -894,8 +950,10 @@ EXPLAIN (ANALYZE, BUFFERS OFF) SELECT * FROM tenk1 WHERE unique1 < 100 AND un
   ->  BitmapAnd  (cost=25.07..25.07 rows=10 width=0) (actual time=0.100..0.101 rows=0.00 loops=1)
         ->  Bitmap Index Scan on tenk1_unique1  (cost=0.00..5.04 rows=100 width=0) (actual time=0.027..0.027 rows=100.00 loops=1)
               Index Cond: (unique1 < 100)
+              Index Searches: 1
        ->  Bitmap Index Scan on tenk1_unique2  (cost=0.00..19.78 rows=999 width=0) (actual time=0.070..0.070 rows=999.00 loops=1)
              Index Cond: (unique2 > 9000)
+             Index Searches: 1
 Planning Time: 0.162 ms
 Execution Time: 0.143 ms
 </screen>
@@ -923,6 +981,7 @@ EXPLAIN ANALYZE UPDATE tenk1 SET hundred = hundred + 1 WHERE unique1 < 100;
         Buffers: shared hit=4 read=2
         ->  Bitmap Index Scan on tenk1_unique1  (cost=0.00..5.04 rows=100 width=0) (actual time=0.031..0.031 rows=100.00 loops=1)
               Index Cond: (unique1 < 100)
+              Index Searches: 1
               Buffers: shared read=2
 Planning Time: 0.151 ms
 Execution Time: 1.856 ms
@@ -1061,6 +1120,7 @@ EXPLAIN ANALYZE SELECT * FROM tenk1 WHERE unique1 < 100 AND unique2 > 9000
   Index Cond: (unique2 > 9000)
   Filter: (unique1 < 100)
   Rows Removed by Filter: 287
+  Index Searches: 1
   Buffers: shared hit=16
 Planning Time: 0.077 ms
 Execution Time: 0.086 ms
diff --git a/doc/src/sgml/ref/explain.sgml b/doc/src/sgml/ref/explain.sgml
index 7daddf03ef0..9ed1061b7ff 100644
--- a/doc/src/sgml/ref/explain.sgml
+++ b/doc/src/sgml/ref/explain.sgml
@@ -506,10 +506,11 @@ EXPLAIN ANALYZE EXECUTE query(100, 200);
   Buffers: shared hit=4
   ->  Index Scan using test_pkey on test  (cost=0.29..10.27 rows=99 width=8) (actual time=0.009..0.025 rows=99.00 loops=1)
         Index Cond: ((id > 100) AND (id < 200))
+        Index Searches: 1
         Buffers: shared hit=4
 Planning Time: 0.244 ms
 Execution Time: 0.073 ms
-(9 rows)
+(10 rows)
 </programlisting>
  </para>
diff --git a/doc/src/sgml/rules.sgml b/doc/src/sgml/rules.sgml
index 1d9924a2a3c..8467d961fd0 100644
--- a/doc/src/sgml/rules.sgml
+++ b/doc/src/sgml/rules.sgml
@@ -1046,6 +1046,7 @@ SELECT count(*) FROM words WHERE word = 'caterpiler';
    ->  Index Only Scan using wrd_word on wrd  (cost=0.42..4.44 rows=1 width=0) (actual time=0.039..0.039 rows=0.00 loops=1)
          Index Cond: (word = 'caterpiler'::text)
          Heap Fetches: 0
+         Index Searches: 1
 Planning time: 0.164 ms
 Execution time: 0.117 ms
 </programlisting>
@@ -1090,6 +1091,7 @@ SELECT word FROM words ORDER BY word <-> 'caterpiler' LIMIT 10;
 Limit  (cost=0.29..1.06 rows=10 width=10) (actual time=187.222..188.257 rows=10.00 loops=1)
   ->  Index Scan using wrd_trgm on wrd  (cost=0.29..37020.87 rows=479829 width=10) (actual time=187.219..188.252 rows=10.00 loops=1)
         Order By: (word <-> 'caterpiler'::text)
+        Index Searches: 1
 Planning time: 0.196 ms
 Execution time: 198.640 ms
 </programlisting>