Diffstat (limited to 'doc/src/sgml/ref')
-rw-r--r--  doc/src/sgml/ref/alter_subscription.sgml  |  18
-rw-r--r--  doc/src/sgml/ref/checkpoint.sgml          |  68
-rw-r--r--  doc/src/sgml/ref/create_database.sgml     |  45
-rw-r--r--  doc/src/sgml/ref/create_subscription.sgml |  87
-rw-r--r--  doc/src/sgml/ref/create_trigger.sgml      |   8
-rw-r--r--  doc/src/sgml/ref/createdb.sgml            |   3
-rw-r--r--  doc/src/sgml/ref/pg_basebackup.sgml       |   3
-rw-r--r--  doc/src/sgml/ref/pg_dump.sgml             |  38
-rw-r--r--  doc/src/sgml/ref/pg_dumpall.sgml          | 127
-rw-r--r--  doc/src/sgml/ref/pg_recvlogical.sgml      |  10
-rw-r--r--  doc/src/sgml/ref/pg_restore.sgml          | 106
-rw-r--r--  doc/src/sgml/ref/pgtesttiming.sgml        | 279
-rw-r--r--  doc/src/sgml/ref/psql-ref.sgml            |   9
-rw-r--r--  doc/src/sgml/ref/vacuumdb.sgml            |   8
14 files changed, 358 insertions, 451 deletions
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index fdc648d007f..d48cdc76bd3 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -235,8 +235,9 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
<link linkend="sql-createsubscription-params-with-password-required"><literal>password_required</literal></link>,
<link linkend="sql-createsubscription-params-with-run-as-owner"><literal>run_as_owner</literal></link>,
<link linkend="sql-createsubscription-params-with-origin"><literal>origin</literal></link>,
- <link linkend="sql-createsubscription-params-with-failover"><literal>failover</literal></link>, and
- <link linkend="sql-createsubscription-params-with-two-phase"><literal>two_phase</literal></link>.
+ <link linkend="sql-createsubscription-params-with-failover"><literal>failover</literal></link>,
+ <link linkend="sql-createsubscription-params-with-two-phase"><literal>two_phase</literal></link>, and
+ <link linkend="sql-createsubscription-params-with-retain-dead-tuples"><literal>retain_dead_tuples</literal></link>.
Only a superuser can set <literal>password_required = false</literal>.
</para>
@@ -261,8 +262,9 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
</para>
<para>
- The <link linkend="sql-createsubscription-params-with-failover"><literal>failover</literal></link>
- and <link linkend="sql-createsubscription-params-with-two-phase"><literal>two_phase</literal></link>
+ The <link linkend="sql-createsubscription-params-with-failover"><literal>failover</literal></link>,
+ <link linkend="sql-createsubscription-params-with-two-phase"><literal>two_phase</literal></link>, and
+ <link linkend="sql-createsubscription-params-with-retain-dead-tuples"><literal>retain_dead_tuples</literal></link>
parameters can only be altered when the subscription is disabled.
</para>
@@ -285,6 +287,14 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
option is changed from <literal>true</literal> to <literal>false</literal>,
the publisher will replicate the transactions again when they are committed.
</para>
+
+ <para>
+ If the <link linkend="sql-createsubscription-params-with-retain-dead-tuples"><literal>retain_dead_tuples</literal></link>
+ option is altered to <literal>false</literal> and no other subscription
+ has this option enabled, the replication slot named
+ <quote><literal>pg_conflict_detection</literal></quote>, created to retain
+ dead tuples for conflict detection, will be dropped.
+ </para>
</listitem>
</varlistentry>
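As a minimal sketch of the workflow implied by the hunk above (the subscription name <literal>mysub</literal> is hypothetical), the option can only be changed while the subscription is disabled, and disabling it on the last subscription that uses it drops the <literal>pg_conflict_detection</literal> slot:

<programlisting>
ALTER SUBSCRIPTION mysub DISABLE;
ALTER SUBSCRIPTION mysub SET (retain_dead_tuples = false);
ALTER SUBSCRIPTION mysub ENABLE;
</programlisting>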
diff --git a/doc/src/sgml/ref/checkpoint.sgml b/doc/src/sgml/ref/checkpoint.sgml
index db011a47d04..cd981cf2cab 100644
--- a/doc/src/sgml/ref/checkpoint.sgml
+++ b/doc/src/sgml/ref/checkpoint.sgml
@@ -21,7 +21,12 @@ PostgreSQL documentation
<refsynopsisdiv>
<synopsis>
-CHECKPOINT
+CHECKPOINT [ ( option [, ...] ) ]
+
+<phrase>where <replaceable class="parameter">option</replaceable> can be one of:</phrase>
+
+ FLUSH_UNLOGGED [ <replaceable class="parameter">boolean</replaceable> ]
+ MODE { FAST | SPREAD }
</synopsis>
</refsynopsisdiv>
@@ -37,15 +42,25 @@ CHECKPOINT
</para>
<para>
- The <command>CHECKPOINT</command> command forces an immediate
+ By default, the <command>CHECKPOINT</command> command forces a fast
checkpoint when the command is issued, without waiting for a
regular checkpoint scheduled by the system (controlled by the settings in
<xref linkend="runtime-config-wal-checkpoints"/>).
+ To request the checkpoint be spread over a longer interval, set the
+ <literal>MODE</literal> option to <literal>SPREAD</literal>.
<command>CHECKPOINT</command> is not intended for use during normal
operation.
</para>
<para>
+ The server may consolidate concurrently requested checkpoints. Such
+ consolidated requests will contain a combined set of options. For example,
+ if one session requests a fast checkpoint and another requests a spread
+ checkpoint, the server may combine those requests and perform one fast
+ checkpoint.
+ </para>
+
+ <para>
If executed during recovery, the <command>CHECKPOINT</command> command
will force a restartpoint (see <xref linkend="wal-configuration"/>)
rather than writing a new checkpoint.
@@ -59,6 +74,55 @@ CHECKPOINT
</refsect1>
<refsect1>
+ <title>Parameters</title>
+
+ <variablelist>
+ <varlistentry>
+ <term><literal>FLUSH_UNLOGGED</literal></term>
+ <listitem>
+ <para>
+ Normally, <command>CHECKPOINT</command> does not flush dirty buffers of
+ unlogged relations. This option, which is disabled by default, enables
+ flushing unlogged relations to disk.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>MODE</literal></term>
+ <listitem>
+ <para>
+ When set to <literal>FAST</literal>, which is the default, the requested
+ checkpoint will be completed as fast as possible, which may result in a
+ significantly higher rate of I/O during the checkpoint.
+ </para>
+ <para>
+ <literal>MODE</literal> can also be set to <literal>SPREAD</literal> to
+ request the checkpoint be spread over a longer interval (controlled via
+ the settings in <xref linkend="runtime-config-wal-checkpoints"/>), like a
+ regular checkpoint scheduled by the system. This can reduce the rate of
+ I/O during the checkpoint.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><replaceable class="parameter">boolean</replaceable></term>
+ <listitem>
+ <para>
+ Specifies whether the selected option should be turned on or off.
+ You can write <literal>TRUE</literal>, <literal>ON</literal>, or
+ <literal>1</literal> to enable the option, and <literal>FALSE</literal>,
+ <literal>OFF</literal>, or <literal>0</literal> to disable it. The
+ <replaceable class="parameter">boolean</replaceable> value can also
+ be omitted, in which case <literal>TRUE</literal> is assumed.
+ </para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ </refsect1>
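For illustration, the syntax documented above allows a manually requested checkpoint to be spread out and to also flush unlogged relations; omitting the boolean after <literal>FLUSH_UNLOGGED</literal> means <literal>TRUE</literal>:

<programlisting>
CHECKPOINT (MODE SPREAD, FLUSH_UNLOGGED);
</programlisting>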
+
+ <refsect1>
<title>Compatibility</title>
<para>
diff --git a/doc/src/sgml/ref/create_database.sgml b/doc/src/sgml/ref/create_database.sgml
index 4da8aeebb50..3544b15efda 100644
--- a/doc/src/sgml/ref/create_database.sgml
+++ b/doc/src/sgml/ref/create_database.sgml
@@ -150,12 +150,12 @@ CREATE DATABASE <replaceable class="parameter">name</replaceable>
<para>
Sets the default collation order and character classification in the
new database. Collation affects the sort order applied to strings,
- e.g., in queries with <literal>ORDER BY</literal>, as well as the order used in indexes
- on text columns. Character classification affects the categorization
- of characters, e.g., lower, upper, and digit. Also sets the
- associated aspects of the operating system environment,
- <literal>LC_COLLATE</literal> and <literal>LC_CTYPE</literal>. The
- default is the same setting as the template database. See <xref
+ e.g., in queries with <literal>ORDER BY</literal>, as well as the
+ order used in indexes on text columns. Character classification
+ affects the categorization of characters, e.g., lower, upper, and
+ digit. Also sets the <literal>LC_CTYPE</literal> aspect of the
+ operating system environment. The default is the same setting as the
+ template database. See <xref
linkend="collation-managing-create-libc"/> and <xref
linkend="collation-managing-create-icu"/> for details.
</para>
@@ -189,17 +189,16 @@ CREATE DATABASE <replaceable class="parameter">name</replaceable>
<term><replaceable class="parameter">lc_collate</replaceable></term>
<listitem>
<para>
- Sets <literal>LC_COLLATE</literal> in the database server's operating
- system environment. The default is the setting of <xref
- linkend="create-database-locale"/> if specified, otherwise the same
- setting as the template database. See below for additional
- restrictions.
+ If <xref linkend="create-database-locale-provider"/> is
+ <literal>libc</literal>, sets the default collation order to use in
+ the new database, overriding the setting <xref
+ linkend="create-database-locale"/>. Otherwise, this setting is
+ ignored.
</para>
<para>
- If <xref linkend="create-database-locale-provider"/> is
- <literal>libc</literal>, also sets the default collation order to use
- in the new database, overriding the setting <xref
- linkend="create-database-locale"/>.
+ The default is the setting of <xref linkend="create-database-locale"/>
+ if specified, otherwise the same setting as the template database.
+ See below for additional restrictions.
</para>
</listitem>
</varlistentry>
@@ -208,16 +207,18 @@ CREATE DATABASE <replaceable class="parameter">name</replaceable>
<listitem>
<para>
Sets <literal>LC_CTYPE</literal> in the database server's operating
- system environment. The default is the setting of <xref
- linkend="create-database-locale"/> if specified, otherwise the same
- setting as the template database. See below for additional
- restrictions.
+ system environment.
</para>
<para>
If <xref linkend="create-database-locale-provider"/> is
- <literal>libc</literal>, also sets the default character
- classification to use in the new database, overriding the setting
- <xref linkend="create-database-locale"/>.
+ <literal>libc</literal>, sets the default character classification to
+ use in the new database, overriding the setting <xref
+ linkend="create-database-locale"/>.
+ </para>
+ <para>
+ The default is the setting of <xref linkend="create-database-locale"/>
+ if specified, otherwise the same setting as the template database.
+ See below for additional restrictions.
</para>
</listitem>
</varlistentry>
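A minimal sketch of how these parameters interact, with hypothetical database and locale names; the <literal>LC_COLLATE</literal> and <literal>LC_CTYPE</literal> values take effect here only because the locale provider is <literal>libc</literal>:

<programlisting>
CREATE DATABASE mydb
    TEMPLATE template0
    LOCALE_PROVIDER libc
    LC_COLLATE 'en_US.UTF-8'
    LC_CTYPE 'en_US.UTF-8';
</programlisting>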
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index 57dec28a5df..b8cd15f3280 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -169,7 +169,9 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
<listitem>
<para>
Name of the publisher's replication slot to use. The default is
- to use the name of the subscription for the slot name.
+ to use the name of the subscription for the slot name. The name cannot
+ be <literal>pg_conflict_detection</literal>, as it is reserved for
+ conflict detection.
</para>
<para>
@@ -435,6 +437,89 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
</para>
</listitem>
</varlistentry>
+
+ <varlistentry id="sql-createsubscription-params-with-retain-dead-tuples">
+ <term><literal>retain_dead_tuples</literal> (<type>boolean</type>)</term>
+ <listitem>
+ <para>
+ Specifies whether the information (e.g., dead tuples, commit
+ timestamps, and origins) required for conflict detection on the
+ subscriber is retained. The default is <literal>false</literal>.
+ If set to <literal>true</literal>, a physical replication slot named
+ <quote><literal>pg_conflict_detection</literal></quote> will be
+ created on the subscriber to prevent the conflict information from
+ being removed.
+ </para>
+
+ <para>
+ Note that the information useful for conflict detection is retained
+ only after the slot has been created. You can verify the existence of
+ this slot by querying <link linkend="view-pg-replication-slots">pg_replication_slots</link>.
+ Even if multiple subscriptions on one node enable this option,
+ only one replication slot will be created. Also,
+ <varname>wal_level</varname> must be set to <literal>replica</literal>
+ or higher to allow the replication slot to be used.
+ </para>
+
+ <caution>
+ <para>
+ Note that the information for conflict detection cannot be purged if
+ the subscription is disabled; thus, the information will accumulate
+ until the subscription is enabled. To prevent excessive accumulation,
+ it is recommended to disable <literal>retain_dead_tuples</literal>
+ if the subscription will be inactive for an extended period.
+ </para>
+
+ <para>
+ Additionally, when enabling <literal>retain_dead_tuples</literal> for
+ conflict detection in logical replication, it is important to design the
+ replication topology to balance data retention requirements with
+ overall system performance. When applied appropriately, this option
+ incurs minimal performance overhead. The following scenarios illustrate
+ effective usage patterns.
+ </para>
+
+ <para>
+ a. Large Tables with Bidirectional Writes:
+ For large tables subject to concurrent writes on both publisher and
+ subscriber nodes, publishers can define row filters when creating
+ publications to segment data. This allows multiple subscriptions
+ to replicate exclusive subsets of the table in parallel, improving
+ throughput.
+ </para>
+
+ <para>
+ b. Write-Enabled Subscribers:
+ If a subscriber node is expected to perform write operations, replication
+ can be structured using multiple publications and subscriptions. By
+ distributing tables across these publications, the workload is spread among
+ several apply workers, improving concurrency and reducing contention.
+ </para>
+
+ <para>
+ c. Read-Only Subscribers:
+ In configurations involving one or more publisher nodes performing
+ concurrent write operations, read-only subscriber nodes can replicate
+ changes without a noticeable performance impact as long as they use
+ index scans. However, if the subscriber is affected by replication lag
+ or by scan performance (for example, due to sequential scans), it needs
+ to follow one of the two previous strategies to distribute the workload
+ on the subscriber.
+ </para>
+ </caution>
+
+ <para>
+ This option cannot be enabled if the publisher is a physical standby.
+ </para>
+
+ <para>
+ Enabling this option ensures retention of information useful for
+ conflict detection solely for changes occurring locally on the
+ publisher. For changes originating from other origins,
+ reliable conflict detection cannot be guaranteed.
+ </para>
+ </listitem>
+ </varlistentry>
</variablelist></para>
</listitem>
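A minimal sketch of enabling the option and then checking for the slot mentioned above (the connection string, publication, and subscription names are hypothetical):

<programlisting>
CREATE SUBSCRIPTION mysub
    CONNECTION 'host=publisher dbname=postgres'
    PUBLICATION mypub
    WITH (retain_dead_tuples = true);

SELECT slot_name, slot_type
  FROM pg_replication_slots
 WHERE slot_name = 'pg_conflict_detection';
</programlisting>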
diff --git a/doc/src/sgml/ref/create_trigger.sgml b/doc/src/sgml/ref/create_trigger.sgml
index ed6d206ae71..0d8d463479b 100644
--- a/doc/src/sgml/ref/create_trigger.sgml
+++ b/doc/src/sgml/ref/create_trigger.sgml
@@ -197,9 +197,11 @@ CREATE [ OR REPLACE ] [ CONSTRAINT ] TRIGGER <replaceable class="parameter">name
of the rows inserted, deleted, or modified by the current SQL statement.
This feature lets the trigger see a global view of what the statement did,
not just one row at a time. This option is only allowed for
- an <literal>AFTER</literal> trigger that is not a constraint trigger; also, if
- the trigger is an <literal>UPDATE</literal> trigger, it must not specify
- a <replaceable class="parameter">column_name</replaceable> list.
+ an <literal>AFTER</literal> trigger on a plain table (not a foreign table)
+ that is not a constraint trigger. Also, if the trigger is
+ an <literal>UPDATE</literal> trigger, it must not specify
+ a <replaceable class="parameter">column_name</replaceable> list when using
+ this option.
<literal>OLD TABLE</literal> may only be specified once, and only for a trigger
that can fire on <literal>UPDATE</literal> or <literal>DELETE</literal>; it creates a
transition relation containing the <firstterm>before-images</firstterm> of all rows
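A sketch of a statement-level <literal>AFTER</literal> trigger that satisfies the restrictions described above (the table, trigger, and trigger function names are hypothetical, and the function is assumed to exist):

<programlisting>
CREATE TRIGGER audit_account_updates
    AFTER UPDATE ON accounts
    REFERENCING OLD TABLE AS old_rows NEW TABLE AS new_rows
    FOR EACH STATEMENT
    EXECUTE FUNCTION audit_account_updates();
</programlisting>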
diff --git a/doc/src/sgml/ref/createdb.sgml b/doc/src/sgml/ref/createdb.sgml
index 5c4e0465ed9..2ccbe13f390 100644
--- a/doc/src/sgml/ref/createdb.sgml
+++ b/doc/src/sgml/ref/createdb.sgml
@@ -136,7 +136,8 @@ PostgreSQL documentation
<term><option>--lc-collate=<replaceable class="parameter">locale</replaceable></option></term>
<listitem>
<para>
- Specifies the LC_COLLATE setting to be used in this database.
+ Specifies the LC_COLLATE setting to be used in this database (ignored
+ unless the locale provider is <literal>libc</literal>).
</para>
</listitem>
</varlistentry>
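A hedged shell equivalent (the database name and locale values are placeholders); the flag is honored only together with the <literal>libc</literal> provider:

<screen>
createdb --locale-provider=libc --lc-collate='en_US.UTF-8' --lc-ctype='en_US.UTF-8' --template=template0 mydb
</screen>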
diff --git a/doc/src/sgml/ref/pg_basebackup.sgml b/doc/src/sgml/ref/pg_basebackup.sgml
index 9659f76042c..fecee08b0a5 100644
--- a/doc/src/sgml/ref/pg_basebackup.sgml
+++ b/doc/src/sgml/ref/pg_basebackup.sgml
@@ -500,8 +500,9 @@ PostgreSQL documentation
<term><option>--checkpoint={fast|spread}</option></term>
<listitem>
<para>
- Sets checkpoint mode to fast (immediate) or spread (the default)
+ Sets checkpoint mode to fast or spread
(see <xref linkend="backup-lowlevel-base-backup"/>).
+ The default is spread.
</para>
</listitem>
</varlistentry>
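For example (the target directory is a placeholder), a fast checkpoint can be requested so the base backup starts without waiting for a spread checkpoint to complete:

<screen>
pg_basebackup -D /path/to/backupdir --checkpoint=fast
</screen>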
diff --git a/doc/src/sgml/ref/pg_dump.sgml b/doc/src/sgml/ref/pg_dump.sgml
index 2ae084b5fa6..0bc7609bdf8 100644
--- a/doc/src/sgml/ref/pg_dump.sgml
+++ b/doc/src/sgml/ref/pg_dump.sgml
@@ -1355,6 +1355,15 @@ PostgreSQL documentation
</varlistentry>
<varlistentry>
+ <term><option>--statistics</option></term>
+ <listitem>
+ <para>
+ Dump statistics.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
<term><option>--statistics-only</option></term>
<listitem>
<para>
@@ -1441,33 +1450,6 @@ PostgreSQL documentation
</varlistentry>
<varlistentry>
- <term><option>--with-data</option></term>
- <listitem>
- <para>
- Dump data. This is the default.
- </para>
- </listitem>
- </varlistentry>
-
- <varlistentry>
- <term><option>--with-schema</option></term>
- <listitem>
- <para>
- Dump schema (data definitions). This is the default.
- </para>
- </listitem>
- </varlistentry>
-
- <varlistentry>
- <term><option>--with-statistics</option></term>
- <listitem>
- <para>
- Dump statistics.
- </para>
- </listitem>
- </varlistentry>
-
- <varlistentry>
<term><option>-?</option></term>
<term><option>--help</option></term>
<listitem>
@@ -1682,7 +1664,7 @@ CREATE DATABASE foo WITH TEMPLATE template0;
</para>
<para>
- If <option>--with-statistics</option> is specified,
+ If <option>--statistics</option> is specified,
<command>pg_dump</command> will include most optimizer statistics in the
resulting dump file. However, some statistics may not be included, such as
those created explicitly with <xref linkend="sql-createstatistics"/> or
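A minimal sketch using the renamed option (the database and output file names are hypothetical):

<screen>
pg_dump --statistics --format=custom --file=mydb.dump mydb
</screen>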
diff --git a/doc/src/sgml/ref/pg_dumpall.sgml b/doc/src/sgml/ref/pg_dumpall.sgml
index 8ca68da5a55..364442f00f2 100644
--- a/doc/src/sgml/ref/pg_dumpall.sgml
+++ b/doc/src/sgml/ref/pg_dumpall.sgml
@@ -16,10 +16,7 @@ PostgreSQL documentation
<refnamediv>
<refname>pg_dumpall</refname>
-
- <refpurpose>
- export a <productname>PostgreSQL</productname> database cluster as an SQL script or to other formats
- </refpurpose>
+ <refpurpose>extract a <productname>PostgreSQL</productname> database cluster into a script file</refpurpose>
</refnamediv>
<refsynopsisdiv>
@@ -36,7 +33,7 @@ PostgreSQL documentation
<para>
<application>pg_dumpall</application> is a utility for writing out
(<quote>dumping</quote>) all <productname>PostgreSQL</productname> databases
- of a cluster into an SQL script file or an archive. The output contains
+ of a cluster into one script file. The script file contains
<acronym>SQL</acronym> commands that can be used as input to <xref
linkend="app-psql"/> to restore the databases. It does this by
calling <xref linkend="app-pgdump"/> for each database in the cluster.
@@ -55,17 +52,12 @@ PostgreSQL documentation
</para>
<para>
- Plain text SQL scripts will be written to the standard output. Use the
+ The SQL script will be written to the standard output. Use the
<option>-f</option>/<option>--file</option> option or shell operators to
redirect it into a file.
</para>
<para>
- Archives in other formats will be placed in a directory named using the
- <option>-f</option>/<option>--file</option>, which is required in this case.
- </para>
-
- <para>
<application>pg_dumpall</application> needs to connect several
times to the <productname>PostgreSQL</productname> server (once per
database). If you use password authentication it will ask for
@@ -129,86 +121,11 @@ PostgreSQL documentation
<para>
Send output to the specified file. If this is omitted, the
standard output is used.
- Note: This option can only be omitted when <option>--format</option> is plain
</para>
</listitem>
</varlistentry>
<varlistentry>
- <term><option>-F <replaceable class="parameter">format</replaceable></option></term>
- <term><option>--format=<replaceable class="parameter">format</replaceable></option></term>
- <listitem>
- <para>
- Specify the format of dump files. In plain format, all the dump data is
- sent in a single text stream. This is the default.
-
- In all other modes, <application>pg_dumpall</application> first creates two files:
- <filename>global.dat</filename> and <filename>map.dat</filename>, in the directory
- specified by <option>--file</option>.
- The first file contains global data, such as roles and tablespaces. The second
- contains a mapping between database oids and names. These files are used by
- <application>pg_restore</application>. Data for individual databases is placed in
- <filename>databases</filename> subdirectory, named using the database's <type>oid</type>.
-
- <variablelist>
- <varlistentry>
- <term><literal>d</literal></term>
- <term><literal>directory</literal></term>
- <listitem>
- <para>
- Output directory-format archives for each database,
- suitable for input into pg_restore. The directory
- will have database <type>oid</type> as its name.
- </para>
- </listitem>
- </varlistentry>
-
- <varlistentry>
- <term><literal>p</literal></term>
- <term><literal>plain</literal></term>
- <listitem>
- <para>
- Output a plain-text SQL script file (the default).
- </para>
- </listitem>
- </varlistentry>
-
- <varlistentry>
- <term><literal>c</literal></term>
- <term><literal>custom</literal></term>
- <listitem>
- <para>
- Output a custom-format archive for each database,
- suitable for input into pg_restore. The archive
- will be named <filename>dboid.dmp</filename> where <type>dboid</type> is the
- <type>oid</type> of the database.
- </para>
- </listitem>
- </varlistentry>
-
- <varlistentry>
- <term><literal>t</literal></term>
- <term><literal>tar</literal></term>
- <listitem>
- <para>
- Output a tar-format archive for each database,
- suitable for input into pg_restore. The archive
- will be named <filename>dboid.tar</filename> where <type>dboid</type> is the
- <type>oid</type> of the database.
- </para>
- </listitem>
- </varlistentry>
-
- </variablelist>
-
- Note: see <xref linkend="app-pgdump"/> for details
- of how the various non plain text archives work.
-
- </para>
- </listitem>
- </varlistentry>
-
- <varlistentry>
<term><option>-g</option></term>
<term><option>--globals-only</option></term>
<listitem>
@@ -689,6 +606,15 @@ exclude database <replaceable class="parameter">PATTERN</replaceable>
</varlistentry>
<varlistentry>
+ <term><option>--statistics</option></term>
+ <listitem>
+ <para>
+ Dump statistics.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
<term><option>--statistics-only</option></term>
<listitem>
<para>
@@ -724,33 +650,6 @@ exclude database <replaceable class="parameter">PATTERN</replaceable>
</varlistentry>
<varlistentry>
- <term><option>--with-data</option></term>
- <listitem>
- <para>
- Dump data. This is the default.
- </para>
- </listitem>
- </varlistentry>
-
- <varlistentry>
- <term><option>--with-schema</option></term>
- <listitem>
- <para>
- Dump schema (data definitions). This is the default.
- </para>
- </listitem>
- </varlistentry>
-
- <varlistentry>
- <term><option>--with-statistics</option></term>
- <listitem>
- <para>
- Dump statistics.
- </para>
- </listitem>
- </varlistentry>
-
- <varlistentry>
<term><option>-?</option></term>
<term><option>--help</option></term>
<listitem>
@@ -961,7 +860,7 @@ exclude database <replaceable class="parameter">PATTERN</replaceable>
</para>
<para>
- If <option>--with-statistics</option> is specified,
+ If <option>--statistics</option> is specified,
<command>pg_dumpall</command> will include most optimizer statistics in the
resulting dump file. However, some statistics may not be included, such as
those created explicitly with <xref linkend="sql-createstatistics"/> or
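Likewise for <application>pg_dumpall</application>, a hedged sketch with a hypothetical output file name:

<screen>
pg_dumpall --statistics --file=cluster.sql
</screen>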
diff --git a/doc/src/sgml/ref/pg_recvlogical.sgml b/doc/src/sgml/ref/pg_recvlogical.sgml
index f68182266a9..263ebdeeab4 100644
--- a/doc/src/sgml/ref/pg_recvlogical.sgml
+++ b/doc/src/sgml/ref/pg_recvlogical.sgml
@@ -53,6 +53,16 @@ PostgreSQL documentation
(<keycombo action="simul"><keycap>Control</keycap><keycap>C</keycap></keycombo>)
or <systemitem>SIGTERM</systemitem> signal.
</para>
+
+ <para>
+ When <application>pg_recvlogical</application> receives
+ a <systemitem>SIGHUP</systemitem> signal, it closes the current output file
+ and opens a new one using the filename specified by
+ the <option>--file</option> option. This allows the output file to be
+ rotated by first renaming the current file and then sending
+ a <systemitem>SIGHUP</systemitem> signal to
+ <application>pg_recvlogical</application>.
+ </para>
</refsect1>
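A sketch of the rotation sequence described above, assuming <application>pg_recvlogical</application> was started with <option>--file=output.dat</option> and that its process is looked up by name:

<screen>
mv output.dat output.dat.old
kill -HUP $(pgrep -x pg_recvlogical)
</screen>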
<refsect1>
diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml
index b649bd3a5ae..261ead15039 100644
--- a/doc/src/sgml/ref/pg_restore.sgml
+++ b/doc/src/sgml/ref/pg_restore.sgml
@@ -18,9 +18,8 @@ PostgreSQL documentation
<refname>pg_restore</refname>
<refpurpose>
- restore <productname>PostgreSQL</productname> databases from archives
- created by <application>pg_dump</application> or
- <application>pg_dumpall</application>
+ restore a <productname>PostgreSQL</productname> database from an
+ archive file created by <application>pg_dump</application>
</refpurpose>
</refnamediv>
@@ -39,14 +38,13 @@ PostgreSQL documentation
<para>
<application>pg_restore</application> is a utility for restoring a
- <productname>PostgreSQL</productname> database or cluster from an archive
- created by <xref linkend="app-pgdump"/> or
- <xref linkend="app-pg-dumpall"/> in one of the non-plain-text
+ <productname>PostgreSQL</productname> database from an archive
+ created by <xref linkend="app-pgdump"/> in one of the non-plain-text
formats. It will issue the commands necessary to reconstruct the
- database or cluster to the state it was in at the time it was saved. The
- archives also allow <application>pg_restore</application> to
+ database to the state it was in at the time it was saved. The
+ archive files also allow <application>pg_restore</application> to
be selective about what is restored, or even to reorder the items
- prior to being restored. The archive formats are designed to be
+ prior to being restored. The archive files are designed to be
portable across architectures.
</para>
@@ -54,17 +52,10 @@ PostgreSQL documentation
<application>pg_restore</application> can operate in two modes.
If a database name is specified, <application>pg_restore</application>
connects to that database and restores archive contents directly into
- the database.
- When restoring from a dump made by <application>pg_dumpall</application>,
- each database will be created and then the restoration will be run in that
- database.
-
- Otherwise, when a database name is not specified, a script containing the SQL
- commands necessary to rebuild the database or cluster is created and written
+ the database. Otherwise, a script containing the SQL
+ commands necessary to rebuild the database is created and written
to a file or standard output. This script output is equivalent to
- the plain text output format of <application>pg_dump</application> or
- <application>pg_dumpall</application>.
-
+ the plain text output format of <application>pg_dump</application>.
Some of the options controlling the output are therefore analogous to
<application>pg_dump</application> options.
</para>
@@ -149,8 +140,6 @@ PostgreSQL documentation
commands that mention this database.
Access privileges for the database itself are also restored,
unless <option>--no-acl</option> is specified.
- <option>--create</option> is required when restoring multiple databases
- from an archive created by <application>pg_dumpall</application>.
</para>
<para>
@@ -247,19 +236,6 @@ PostgreSQL documentation
</varlistentry>
<varlistentry>
- <term><option>-g</option></term>
- <term><option>--globals-only</option></term>
- <listitem>
- <para>
- Restore only global objects (roles and tablespaces), no databases.
- </para>
- <para>
- This option is only relevant when restoring from an archive made using <application>pg_dumpall</application>.
- </para>
- </listitem>
- </varlistentry>
-
- <varlistentry>
<term><option>-I <replaceable class="parameter">index</replaceable></option></term>
<term><option>--index=<replaceable class="parameter">index</replaceable></option></term>
<listitem>
@@ -604,28 +580,6 @@ PostgreSQL documentation
</varlistentry>
<varlistentry>
- <term><option>--exclude-database=<replaceable class="parameter">pattern</replaceable></option></term>
- <listitem>
- <para>
- Do not restore databases whose name matches
- <replaceable class="parameter">pattern</replaceable>.
- Multiple patterns can be excluded by writing multiple
- <option>--exclude-database</option> switches. The
- <replaceable class="parameter">pattern</replaceable> parameter is
- interpreted as a pattern according to the same rules used by
- <application>psql</application>'s <literal>\d</literal>
- commands (see <xref linkend="app-psql-patterns"/>),
- so multiple databases can also be excluded by writing wildcard
- characters in the pattern. When using wildcards, be careful to
- quote the pattern if needed to prevent shell wildcard expansion.
- </para>
- <para>
- This option is only relevant when restoring from an archive made using <application>pg_dumpall</application>.
- </para>
- </listitem>
- </varlistentry>
-
- <varlistentry>
<term><option>--filter=<replaceable class="parameter">filename</replaceable></option></term>
<listitem>
<para>
@@ -862,6 +816,16 @@ PostgreSQL documentation
</varlistentry>
<varlistentry>
+ <term><option>--statistics</option></term>
+ <listitem>
+ <para>
+ Output commands to restore statistics, if the archive contains them.
+ This is the default.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
<term><option>--statistics-only</option></term>
<listitem>
<para>
@@ -920,36 +884,6 @@ PostgreSQL documentation
</varlistentry>
<varlistentry>
- <term><option>--with-data</option></term>
- <listitem>
- <para>
- Output commands to restore data, if the archive contains them.
- This is the default.
- </para>
- </listitem>
- </varlistentry>
-
- <varlistentry>
- <term><option>--with-schema</option></term>
- <listitem>
- <para>
- Output commands to restore schema (data definitions), if the archive
- contains them. This is the default.
- </para>
- </listitem>
- </varlistentry>
-
- <varlistentry>
- <term><option>--with-statistics</option></term>
- <listitem>
- <para>
- Output commands to restore statistics, if the archive contains them.
- This is the default.
- </para>
- </listitem>
- </varlistentry>
-
- <varlistentry>
<term><option>-?</option></term>
<term><option>--help</option></term>
<listitem>
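A minimal sketch with hypothetical archive and database names; restoring statistics is the default, and the renamed flag requests it explicitly:

<screen>
pg_restore --statistics --dbname=mydb mydb.dump
</screen>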
diff --git a/doc/src/sgml/ref/pgtesttiming.sgml b/doc/src/sgml/ref/pgtesttiming.sgml
index a5eb3aa25e0..afe6a12be4b 100644
--- a/doc/src/sgml/ref/pgtesttiming.sgml
+++ b/doc/src/sgml/ref/pgtesttiming.sgml
@@ -30,11 +30,23 @@ PostgreSQL documentation
<title>Description</title>
<para>
- <application>pg_test_timing</application> is a tool to measure the timing overhead
- on your system and confirm that the system time never moves backwards.
+ <application>pg_test_timing</application> is a tool to measure the
+ timing overhead on your system and confirm that the system time never
+ moves backwards. It simply reads the system clock over and over again
+ as fast as it can for a specified length of time, and then prints
+ statistics about the observed differences in successive clock readings.
+ </para>
+ <para>
+ Smaller (but not zero) differences are better, since they imply both
+ more-precise clock hardware and less overhead to collect a clock reading.
Systems that are slow to collect timing data can give less accurate
<command>EXPLAIN ANALYZE</command> results.
</para>
+ <para>
+ This tool is also helpful to determine if
+ the <varname>track_io_timing</varname> configuration parameter is likely
+ to produce useful results.
+ </para>
</refsect1>
<refsect1>
@@ -60,6 +72,21 @@ PostgreSQL documentation
</varlistentry>
<varlistentry>
+ <term><option>-c <replaceable class="parameter">cutoff</replaceable></option></term>
+ <term><option>--cutoff=<replaceable class="parameter">cutoff</replaceable></option></term>
+ <listitem>
+ <para>
+ Specifies the cutoff percentage for the list of exact observed
+ timing durations (that is, the changes in the system clock value
+ from one reading to the next). The list will end once the running
+ percentage total reaches or exceeds this value, except that the
+ largest observed duration will always be printed. The default
+ cutoff is 99.99.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
<term><option>-V</option></term>
<term><option>--version</option></term>
<listitem>
@@ -92,205 +119,83 @@ PostgreSQL documentation
<title>Interpreting Results</title>
<para>
- Good results will show most (>90%) individual timing calls take less than
- one microsecond. Average per loop overhead will be even lower, below 100
- nanoseconds. This example from an Intel i7-860 system using a TSC clock
- source shows excellent performance:
-
-<screen><![CDATA[
-Testing timing overhead for 3 seconds.
-Per loop time including overhead: 35.96 ns
-Histogram of timing durations:
- < us % of total count
- 1 96.40465 80435604
- 2 3.59518 2999652
- 4 0.00015 126
- 8 0.00002 13
- 16 0.00000 2
-]]></screen>
+ The first block of output has four columns, with rows showing a
+ shifted-by-one log2(ns) histogram of timing durations (that is, the
+ differences between successive clock readings). This is not the
+ classic log2(n+1) histogram, as it counts zeros separately and then
+ switches to log2(ns) buckets starting from the value 1.
</para>
-
<para>
- Note that different units are used for the per loop time than the
- histogram. The loop can have resolution within a few nanoseconds (ns),
- while the individual timing calls can only resolve down to one microsecond
- (us).
+ The columns are:
+ <itemizedlist spacing="compact">
+ <listitem>
+ <simpara>nanosecond value that is &gt;= the durations in this
+ bucket</simpara>
+ </listitem>
+ <listitem>
+ <simpara>percentage of durations in this bucket</simpara>
+ </listitem>
+ <listitem>
+ <simpara>running-sum percentage of durations in this and previous
+ buckets</simpara>
+ </listitem>
+ <listitem>
+ <simpara>count of durations in this bucket</simpara>
+ </listitem>
+ </itemizedlist>
</para>
-
- </refsect2>
- <refsect2>
- <title>Measuring Executor Timing Overhead</title>
-
<para>
- When the query executor is running a statement using
- <command>EXPLAIN ANALYZE</command>, individual operations are timed as well
- as showing a summary. The overhead of your system can be checked by
- counting rows with the <application>psql</application> program:
-
-<screen>
-CREATE TABLE t AS SELECT * FROM generate_series(1,100000);
-\timing
-SELECT COUNT(*) FROM t;
-EXPLAIN ANALYZE SELECT COUNT(*) FROM t;
-</screen>
+ The second block of output goes into more detail, showing the exact
+ timing differences observed. For brevity this list is cut off when the
+ running-sum percentage exceeds the user-selectable cutoff value.
+ However, the largest observed difference is always shown.
</para>
-
<para>
- The i7-860 system measured runs the count query in 9.8 ms while
- the <command>EXPLAIN ANALYZE</command> version takes 16.6 ms, each
- processing just over 100,000 rows. That 6.8 ms difference means the timing
- overhead per row is 68 ns, about twice what pg_test_timing estimated it
- would be. Even that relatively small amount of overhead is making the fully
- timed count statement take almost 70% longer. On more substantial queries,
- the timing overhead would be less problematic.
+ The example results below show that 99.99% of timing loops took between
+ 8 and 31 nanoseconds, with the worst case somewhere between 32768 and
+ 65535 nanoseconds. In the second block, we can see that typical loop
+ time is 16 nanoseconds, and the readings appear to have full nanosecond
+ precision.
</para>
- </refsect2>
-
- <refsect2>
- <title>Changing Time Sources</title>
<para>
- On some newer Linux systems, it's possible to change the clock source used
- to collect timing data at any time. A second example shows the slowdown
- possible from switching to the slower acpi_pm time source, on the same
- system used for the fast results above:
-
<screen><![CDATA[
-# cat /sys/devices/system/clocksource/clocksource0/available_clocksource
-tsc hpet acpi_pm
-# echo acpi_pm > /sys/devices/system/clocksource/clocksource0/current_clocksource
-# pg_test_timing
-Per loop time including overhead: 722.92 ns
+Testing timing overhead for 3 seconds.
+Average loop time including overhead: 16.40 ns
Histogram of timing durations:
- < us % of total count
- 1 27.84870 1155682
- 2 72.05956 2990371
- 4 0.07810 3241
- 8 0.01357 563
- 16 0.00007 3
+ <= ns % of total running % count
+ 0 0.0000 0.0000 0
+ 1 0.0000 0.0000 0
+ 3 0.0000 0.0000 0
+ 7 0.0000 0.0000 0
+ 15 4.5452 4.5452 8313178
+ 31 95.4527 99.9979 174581501
+ 63 0.0001 99.9981 253
+ 127 0.0001 99.9982 165
+ 255 0.0000 99.9982 35
+ 511 0.0000 99.9982 1
+ 1023 0.0013 99.9994 2300
+ 2047 0.0004 99.9998 690
+ 4095 0.0000 99.9998 9
+ 8191 0.0000 99.9998 8
+ 16383 0.0002 100.0000 337
+ 32767 0.0000 100.0000 2
+ 65535 0.0000 100.0000 1
+
+Observed timing durations up to 99.9900%:
+ ns % of total running % count
+ 15 4.5452 4.5452 8313178
+ 16 58.3785 62.9237 106773354
+ 17 33.6840 96.6078 61607584
+ 18 3.1151 99.7229 5697480
+ 19 0.2638 99.9867 482570
+ 20 0.0093 99.9960 17054
+...
+ 38051 0.0000 100.0000 1
]]></screen>
</para>
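Output along these lines could come from an invocation such as the following (a sketch; the cutoff shown is the default, and exact figures vary by machine):

<screen>
pg_test_timing --duration=3 --cutoff=99.99
</screen>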
- <para>
- In this configuration, the sample <command>EXPLAIN ANALYZE</command> above
- takes 115.9 ms. That's 1061 ns of timing overhead, again a small multiple
- of what's measured directly by this utility. That much timing overhead
- means the actual query itself is only taking a tiny fraction of the
- accounted for time, most of it is being consumed in overhead instead. In
- this configuration, any <command>EXPLAIN ANALYZE</command> totals involving
- many timed operations would be inflated significantly by timing overhead.
- </para>
-
- <para>
- FreeBSD also allows changing the time source on the fly, and it logs
- information about the timer selected during boot:
-
-<screen>
-# dmesg | grep "Timecounter"
-Timecounter "ACPI-fast" frequency 3579545 Hz quality 900
-Timecounter "i8254" frequency 1193182 Hz quality 0
-Timecounters tick every 10.000 msec
-Timecounter "TSC" frequency 2531787134 Hz quality 800
-# sysctl kern.timecounter.hardware=TSC
-kern.timecounter.hardware: ACPI-fast -> TSC
-</screen>
- </para>
-
- <para>
- Other systems may only allow setting the time source on boot. On older
- Linux systems the "clock" kernel setting is the only way to make this sort
- of change. And even on some more recent ones, the only option you'll see
- for a clock source is "jiffies". Jiffies are the older Linux software clock
- implementation, which can have good resolution when it's backed by fast
- enough timing hardware, as in this example:
-
-<screen><![CDATA[
-$ cat /sys/devices/system/clocksource/clocksource0/available_clocksource
-jiffies
-$ dmesg | grep time.c
-time.c: Using 3.579545 MHz WALL PM GTOD PIT/TSC timer.
-time.c: Detected 2400.153 MHz processor.
-$ pg_test_timing
-Testing timing overhead for 3 seconds.
-Per timing duration including loop overhead: 97.75 ns
-Histogram of timing durations:
- < us % of total count
- 1 90.23734 27694571
- 2 9.75277 2993204
- 4 0.00981 3010
- 8 0.00007 22
- 16 0.00000 1
- 32 0.00000 1
-]]></screen></para>
-
</refsect2>
-
- <refsect2>
- <title>Clock Hardware and Timing Accuracy</title>
-
- <para>
- Collecting accurate timing information is normally done on computers using
- hardware clocks with various levels of accuracy. With some hardware the
- operating systems can pass the system clock time almost directly to
- programs. A system clock can also be derived from a chip that simply
- provides timing interrupts, periodic ticks at some known time interval. In
- either case, operating system kernels provide a clock source that hides
- these details. But the accuracy of that clock source and how quickly it can
- return results varies based on the underlying hardware.
- </para>
-
- <para>
- Inaccurate time keeping can result in system instability. Test any change
- to the clock source very carefully. Operating system defaults are sometimes
- made to favor reliability over best accuracy. And if you are using a virtual
- machine, look into the recommended time sources compatible with it. Virtual
- hardware faces additional difficulties when emulating timers, and there are
- often per operating system settings suggested by vendors.
- </para>
-
- <para>
- The Time Stamp Counter (TSC) clock source is the most accurate one available
- on current generation CPUs. It's the preferred way to track the system time
- when it's supported by the operating system and the TSC clock is
- reliable. There are several ways that TSC can fail to provide an accurate
- timing source, making it unreliable. Older systems can have a TSC clock that
- varies based on the CPU temperature, making it unusable for timing. Trying
- to use TSC on some older multicore CPUs can give a reported time that's
- inconsistent among multiple cores. This can result in the time going
- backwards, a problem this program checks for. And even the newest systems
- can fail to provide accurate TSC timing with very aggressive power saving
- configurations.
- </para>
-
- <para>
- Newer operating systems may check for the known TSC problems and switch to a
- slower, more stable clock source when they are seen. If your system
- supports TSC time but doesn't default to that, it may be disabled for a good
- reason. And some operating systems may not detect all the possible problems
- correctly, or will allow using TSC even in situations where it's known to be
- inaccurate.
- </para>
-
- <para>
- The High Precision Event Timer (HPET) is the preferred timer on systems
- where it's available and TSC is not accurate. The timer chip itself is
- programmable to allow up to 100 nanosecond resolution, but you may not see
- that much accuracy in your system clock.
- </para>
-
- <para>
- Advanced Configuration and Power Interface (ACPI) provides a Power
- Management (PM) Timer, which Linux refers to as the acpi_pm. The clock
- derived from acpi_pm will at best provide 300 nanosecond resolution.
- </para>
-
- <para>
- Timers used on older PC hardware include the 8254 Programmable Interval
- Timer (PIT), the real-time clock (RTC), the Advanced Programmable Interrupt
- Controller (APIC) timer, and the Cyclone timer. These timers aim for
- millisecond resolution.
- </para>
- </refsect2>
</refsect1>
<refsect1>
@@ -298,6 +203,8 @@ Histogram of timing durations:
<simplelist type="inline">
<member><xref linkend="sql-explain"/></member>
+ <member><ulink url="https://wiki.postgresql.org/wiki/Pg_test_timing">Wiki
+ discussion about timing</ulink></member>
</simplelist>
</refsect1>
</refentry>
diff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml
index 95f4cac2467..4f7b11175c6 100644
--- a/doc/src/sgml/ref/psql-ref.sgml
+++ b/doc/src/sgml/ref/psql-ref.sgml
@@ -4623,6 +4623,15 @@ bar
</listitem>
</varlistentry>
+ <varlistentry id="app-psql-variables-servicefile">
+ <term><varname>SERVICEFILE</varname></term>
+ <listitem>
+ <para>
+ The service file name, if applicable.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry id="app-psql-variables-shell-error">
<term><varname>SHELL_ERROR</varname></term>
<listitem>
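A brief psql sketch (the path shown is hypothetical and the variable is set only when a connection service file is in use):

<screen>
=&gt; \echo :SERVICEFILE
/home/alice/.pg_service.conf
</screen>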
diff --git a/doc/src/sgml/ref/vacuumdb.sgml b/doc/src/sgml/ref/vacuumdb.sgml
index b0680a61814..c7d9dca17b8 100644
--- a/doc/src/sgml/ref/vacuumdb.sgml
+++ b/doc/src/sgml/ref/vacuumdb.sgml
@@ -282,9 +282,11 @@ PostgreSQL documentation
<listitem>
<para>
Only analyze relations that are missing statistics for a column, index
- expression, or extended statistics object. This option prevents
- <application>vacuumdb</application> from deleting existing statistics
- so that the query optimizer's choices do not become transiently worse.
+ expression, or extended statistics object. When used with
+ <option>--analyze-in-stages</option>, this option prevents
+ <application>vacuumdb</application> from temporarily replacing existing
+ statistics with ones generated with lower statistics targets, thus
+ avoiding transiently worse query optimizer choices.
</para>
<para>
This option can only be used in conjunction with
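A hedged sketch of the combination discussed above; the database name is a placeholder, and the switch name, which is not visible in this hunk, is assumed to be <option>--missing-stats-only</option>:

<screen>
vacuumdb --analyze-in-stages --missing-stats-only mydb
</screen>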