author     Bruce Momjian <bruce@momjian.us>  2007-11-08 19:16:30 +0000
committer  Bruce Momjian <bruce@momjian.us>  2007-11-08 19:16:30 +0000
commit     621e14dcb280be61d1a942ddb5e98dcc167b4813 (patch)
tree       317057452bf52bbe36f2855f79c7974e53516aaf
parent     5db1c58a1acf76355fc0e322347b393876f9b83b (diff)
Add "High Availability, Load Balancing, and Replication Feature Matrix"
table to docs.
-rw-r--r--  doc/src/sgml/high-availability.sgml | 207
1 file changed, 166 insertions(+), 41 deletions(-)
diff --git a/doc/src/sgml/high-availability.sgml b/doc/src/sgml/high-availability.sgml
index 974da2c80a0..6bb6046af36 100644
--- a/doc/src/sgml/high-availability.sgml
+++ b/doc/src/sgml/high-availability.sgml
@@ -1,4 +1,4 @@
-<!-- $PostgreSQL: pgsql/doc/src/sgml/high-availability.sgml,v 1.17 2007/11/04 19:23:24 momjian Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/high-availability.sgml,v 1.18 2007/11/08 19:16:30 momjian Exp $ -->
<chapter id="high-availability">
<title>High Availability, Load Balancing, and Replication</title>
@@ -92,16 +92,23 @@
</para>
<para>
- Shared hardware functionality is common in network storage
- devices. Using a network file system is also possible, though
- care must be taken that the file system has full POSIX behavior.
- One significant limitation of this method is that if the shared
- disk array fails or becomes corrupt, the primary and standby
- servers are both nonfunctional. Another issue is that the
- standby server should never access the shared storage while
+ Shared hardware functionality is common in network storage devices.
+ Using a network file system is also possible, though care must be
+ taken that the file system has full POSIX behavior (see <xref
+ linkend="creating-cluster-nfs">). One significant limitation of this
+ method is that if the shared disk array fails or becomes corrupt, the
+ primary and standby servers are both nonfunctional. Another issue is
+ that the standby server should never access the shared storage while
the primary server is running.
</para>
+ </listitem>
+ </varlistentry>
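If NFS is used as the shared storage for this approach, the mount options matter, because a soft or heavily cached mount can break the POSIX semantics the server relies on. The command below is only an illustrative sketch; the NFS server name, export, and data-directory path are hypothetical, and the exact options supported depend on the NFS client in use:

    # Illustrative only: hard-mount the shared data directory from a
    # hypothetical NFS server, so that I/O problems are reported to
    # PostgreSQL instead of being silently dropped.
    mount -t nfs -o hard,nointr nfs-server:/export/pgdata /usr/local/pgsql/data

Whatever storage is used, some external mechanism must still guarantee that the standby never opens the data directory while the primary is running.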
+
+ <varlistentry>
+ <term>File System Replication</term>
+ <listitem>
+
<para>
A modified version of shared hardware functionality is file system
replication, where all changes to a file system are mirrored to a file
@@ -125,7 +132,7 @@ protocol to make nodes agree on a serializable transactional order.
</varlistentry>
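For the file system replication approach above, DRBD is one widely used block-device mirroring system on Linux. The resource definition below is a purely illustrative sketch, not taken from this documentation: the host names, devices, and addresses are hypothetical, and the exact syntax depends on the DRBD version.

    # Illustrative drbd.conf excerpt (hypothetical hosts and devices).
    # Protocol C does not acknowledge a write until it has reached the
    # peer, so the mirror cannot silently fall behind the primary.
    resource pgdata {
      protocol C;
      on node1 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   192.168.1.1:7788;
        meta-disk internal;
      }
      on node2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   192.168.1.2:7788;
        meta-disk internal;
      }
    }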
<varlistentry>
- <term>Warm Standby Using Point-In-Time Recovery</term>
+ <term>Warm Standby Using Point-In-Time Recovery (<acronym>PITR</>)</term>
<listitem>
<para>
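In outline, the primary continuously archives completed WAL segments and the warm standby continuously restores them, staying a short distance behind the primary. The snippets below are only an illustrative sketch: the archive directory is hypothetical, and pg_standby (a contrib program) is just one way to implement a restore_command that waits for the next segment to arrive.

    # postgresql.conf on the primary: copy each completed WAL segment
    # to a (hypothetical) shared archive directory.
    archive_command = 'cp %p /mnt/archive/%f'

    # recovery.conf on the warm standby: pg_standby waits for each WAL
    # segment to appear in the archive and hands it back to the server.
    restore_command = 'pg_standby /mnt/archive %f %p %r'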
@@ -191,6 +198,21 @@ protocol to make nodes agree on a serializable transactional order.
</varlistentry>
<varlistentry>
+ <term>Asynchronous Multi-Master Replication</term>
+ <listitem>
+
+ <para>
+ For servers that are not regularly connected, like laptops or
+ remote servers, keeping data consistent among servers is a
+ challenge. Using asynchronous multi-master replication, each
+ server works independently, and periodically communicates with
+ the other servers to identify conflicting transactions. The
+ conflicts can be resolved by users or conflict resolution rules.
+ </para>
+ </listitem>
+ </varlistentry>
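A common conflict resolution rule is "latest change wins". As a purely illustrative sketch, not tied to any particular replication tool, each row can carry the time of its last change, and a conflicting change received from another server is applied only if it is newer. The customer and incoming_change tables below are hypothetical:

    -- Hypothetical example: rows record when they were last changed.
    CREATE TABLE customer (
        id          integer PRIMARY KEY,
        name        text,
        changed_at  timestamp with time zone
    );

    -- "Latest change wins": apply a change received from another server
    -- only when it is newer than what this server already has.
    UPDATE customer AS c
       SET name = i.name,
           changed_at = i.changed_at
      FROM incoming_change AS i          -- hypothetical staging table
     WHERE c.id = i.id
       AND i.changed_at > c.changed_at;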
+
+ <varlistentry>
<term>Synchronous Multi-Master Replication</term>
<listitem>
@@ -223,21 +245,6 @@ protocol to make nodes agree on a serializable transactional order.
</varlistentry>
<varlistentry>
- <term>Asynchronous Multi-Master Replication</term>
- <listitem>
-
- <para>
- For servers that are not regularly connected, like laptops or
- remote servers, keeping data consistent among servers is a
- challenge. Using asynchronous multi-master replication, each
- server works independently, and periodically communicates with
- the other servers to identify conflicting transactions. The
- conflicts can be resolved by users or conflict resolution rules.
- </para>
- </listitem>
- </varlistentry>
-
- <varlistentry>
<term>Data Partitioning</term>
<listitem>
@@ -254,23 +261,6 @@ protocol to make nodes agree on a serializable transactional order.
</varlistentry>
<varlistentry>
- <term>Multi-Server Parallel Query Execution</term>
- <listitem>
-
- <para>
- Many of the above solutions allow multiple servers to handle
- multiple queries, but none allow a single query to use multiple
- servers to complete faster. This solution allows multiple
- servers to work concurrently on a single query. This is usually
- accomplished by splitting the data among servers and having
- each server execute its part of the query and return results
- to a central server where they are combined and returned to
- the user. Pgpool-II has this capability.
- </para>
- </listitem>
- </varlistentry>
-
- <varlistentry>
<term>Commercial Solutions</term>
<listitem>
@@ -285,4 +275,139 @@ protocol to make nodes agree on a serializable transactional order.
</variablelist>
+ <para>
+ The table below (<xref linkend="high-availability-matrix">) summarizes
+ the capabilities of the various solutions listed above.
+ </para>
+
+ <table id="high-availability-matrix">
+ <title>High Availability, Load Balancing, and Replication Feature Matrix</title>
+ <tgroup cols="9">
+ <thead>
+ <row>
+ <entry>Feature</entry>
+ <entry>Shared Disk Failover</entry>
+ <entry>File System Replication</entry>
+ <entry>Warm Standby Using PITR</entry>
+ <entry>Master-Slave Replication</entry>
+ <entry>Statement-Based Replication Middleware</entry>
+ <entry>Asynchronous Multi-Master Replication</entry>
+ <entry>Synchronous Multi-Master Replication</entry>
+ <entry>Data Partitioning</entry>
+ </row>
+ </thead>
+
+ <tbody>
+
+ <row>
+ <entry>No special hardware required</entry>
+ <entry align="center"></entry>
+ <entry align="center">&bull;</entry>
+ <entry align="center">&bull;</entry>
+ <entry align="center">&bull;</entry>
+ <entry align="center">&bull;</entry>
+ <entry align="center">&bull;</entry>
+ <entry align="center">&bull;</entry>
+ <entry align="center">&bull;</entry>
+ </row>
+
+ <row>
+ <entry>Allows multiple master servers</entry>
+ <entry align="center"></entry>
+ <entry align="center"></entry>
+ <entry align="center"></entry>
+ <entry align="center"></entry>
+ <entry align="center">&bull;</entry>
+ <entry align="center">&bull;</entry>
+ <entry align="center">&bull;</entry>
+ <entry align="center"></entry>
+ </row>
+
+ <row>
+ <entry>No master server overhead</entry>
+ <entry align="center">&bull;</entry>
+ <entry align="center"></entry>
+ <entry align="center">&bull;</entry>
+ <entry align="center"></entry>
+ <entry align="center"></entry>
+ <entry align="center"></entry>
+ <entry align="center"></entry>
+ <entry align="center"></entry>
+ </row>
+
+ <row>
+ <entry>Master server never locks others</entry>
+ <entry align="center">&bull;</entry>
+ <entry align="center">&bull;</entry>
+ <entry align="center">&bull;</entry>
+ <entry align="center">&bull;</entry>
+ <entry align="center">&bull;</entry>
+ <entry align="center">&bull;</entry>
+ <entry align="center"></entry>
+ <entry align="center">&bull;</entry>
+ </row>
+
+ <row>
+ <entry>Master failure will never lose data</entry>
+ <entry align="center">&bull;</entry>
+ <entry align="center">&bull;</entry>
+ <entry align="center"></entry>
+ <entry align="center"></entry>
+ <entry align="center">&bull;</entry>
+ <entry align="center"></entry>
+ <entry align="center">&bull;</entry>
+ <entry align="center"></entry>
+ </row>
+
+ <row>
+ <entry>Slaves accept read-only queries</entry>
+ <entry align="center"></entry>
+ <entry align="center"></entry>
+ <entry align="center"></entry>
+ <entry align="center">&bull;</entry>
+ <entry align="center">&bull;</entry>
+ <entry align="center">&bull;</entry>
+ <entry align="center">&bull;</entry>
+ <entry align="center">&bull;</entry>
+ </row>
+
+ <row>
+ <entry>Per-table granularity</entry>
+ <entry align="center"></entry>
+ <entry align="center"></entry>
+ <entry align="center"></entry>
+ <entry align="center">&bull;</entry>
+ <entry align="center"></entry>
+ <entry align="center">&bull;</entry>
+ <entry align="center">&bull;</entry>
+ <entry align="center">&bull;</entry>
+ </row>
+
+ <row>
+ <entry>No conflict resolution necessary</entry>
+ <entry align="center">&bull;</entry>
+ <entry align="center">&bull;</entry>
+ <entry align="center">&bull;</entry>
+ <entry align="center">&bull;</entry>
+ <entry align="center"></entry>
+ <entry align="center"></entry>
+ <entry align="center">&bull;</entry>
+ <entry align="center">&bull;</entry>
+ </row>
+
+ </tbody>
+ </tgroup>
+ </table>
+
+ <para>
+ Many of the above solutions allow multiple servers to handle multiple
+ queries, but none allow a single query to use multiple servers to
+ complete faster. Multi-server parallel query execution allows multiple
+ servers to work concurrently on a single query. This is usually
+ accomplished by splitting the data among servers and having each server
+ execute its part of the query and return results to a central server
+ where they are combined and returned to the user. Pgpool-II has this
+ capability.
+ </para>
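As a rough illustration of the split-and-combine idea (not a description of how Pgpool-II works internally), contrib/dblink can run part of a query on each of two hypothetical servers and combine the partial results centrally; the connection strings, database, and table names below are made up:

    -- Illustrative only: compute partial sums on two servers that hold
    -- different portions of the data, then combine them centrally.
    SELECT sum(partial_sum) AS total
      FROM (
            SELECT * FROM dblink('host=server1 dbname=sales',
                                 'SELECT sum(amount) FROM orders')
                   AS t1(partial_sum numeric)
            UNION ALL
            SELECT * FROM dblink('host=server2 dbname=sales',
                                 'SELECT sum(amount) FROM orders')
                   AS t2(partial_sum numeric)
           ) AS partials;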
+
</chapter>