<!-- $PostgreSQL: pgsql/doc/src/sgml/failover.sgml,v 1.4 2006/11/14 21:43:00 momjian Exp $ -->

<chapter id="failover">
 <title>Failover, Replication, Load Balancing, and Clustering Options</title>

 <indexterm><primary>failover</></>
 <indexterm><primary>replication</></>
 <indexterm><primary>load balancing</></>
 <indexterm><primary>clustering</></>

 <para>
  Database servers can work together to allow a backup server to
  quickly take over if the primary server fails (failover), or to
  allow several computers to serve the same data (load balancing).
  Ideally, database servers could work together seamlessly.  Web
  servers serving static web pages can be combined quite easily by
  merely load-balancing web requests to multiple machines.  In
  fact, read-only database servers can be combined relatively easily
  too.  Unfortunately, most database servers have a read/write mix
  of requests, and read/write servers are much harder to combine.
  This is because, while read-only data needs to be placed on each
  server only once, a write to any server must be propagated to all
  servers so that future read requests to those servers return
  consistent results.
 </para>

 <para>
  This synchronization problem is the fundamental difficulty for servers
  working together.  Because there is no single solution that eliminates
  the impact of the sync problem for all use cases, there are multiple
  solutions.  Each solution addresses this problem in a different way, and
  minimizes its impact for a specific workload.
 </para>

 <para>
  Some failover and load balancing solutions are synchronous, meaning that
  a data-modifying transaction is not considered committed until all
  servers have committed the transaction.  This guarantees that a failover
  will not lose any data and that all load-balanced servers will return
  consistent results with no propagation delay. Asynchronous updating has
  a small delay between the time of commit and its propagation to the
  other servers, opening the possibility that some transactions might be
  lost in the switch to a backup server, and that load-balanced servers
  might return slightly stale results.  Asynchronous communication is used
  when synchronous would be too slow.
 </para>

 <para>
  Solutions can also be categorized by their granularity.  Some solutions
  can deal only with an entire database server, while others allow control
  at the per-table or per-database level.
 </para>

 <para>
  Performance must be considered in any failover or load balancing
  choice.  There is usually a tradeoff between functionality and
  performance.  For example, a full synchronous solution over a slow
  network might cut performance by more than half, while an asynchronous
  one might have a minimal performance impact.
 </para>

 <para>
  The remainder of this section outlines various failover, replication,
  and load balancing solutions.
 </para>

 <sect1 id="shared-disk-failover">
  <title>Shared Disk Failover</title>

  <para>
   Shared disk failover avoids synchronization overhead by having only one
   copy of the database.  It uses a single disk array that is shared by
   multiple servers.  If the main database server fails, the backup server
   is able to mount and start the database as though it were recovering from
   a database crash.  This allows rapid failover with no data loss.
  </para>

  <para>
   Shared hardware functionality is common in network storage devices.  One
   significant limitation of this method is that if the shared disk array
   fails or becomes corrupt, the primary and backup servers are both
   nonfunctional.
  </para>
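
  <para>
   For example, failing over might involve little more than mounting the
   shared array on the backup server and starting the database.  A minimal
   sketch follows; the device and data directory names are only
   placeholders:
<programlisting>
# On the backup server, after verifying that the failed primary is
# truly stopped (the array must never be mounted by both servers):
mount /dev/sda1 /usr/local/pgsql/data
pg_ctl start -D /usr/local/pgsql/data    # performs normal crash recovery
</programlisting>
  </para>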
 </sect1>

 <sect1 id="warm-standby-using-point-in-time-recovery">
  <title>Warm Standby Using Point-In-Time Recovery</title>

  <para>
   A warm standby server (see <xref linkend="warm-standby">) can
   be kept current by reading a stream of write-ahead log (WAL)
   records.  If the main server fails, the warm standby contains
   almost all of the data of the main server, and can be quickly
   made the new master database server.  This is asynchronous and
   can only be done for the entire database server.
  </para>
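
  <para>
   A minimal sketch of such a configuration follows; the archive directory
   and copy commands are only placeholders.  On the primary,
   <filename>postgresql.conf</> ships each completed WAL segment:
<programlisting>
archive_command = 'cp %p /mnt/archive/%f'
</programlisting>
   On the standby, a <filename>recovery.conf</> file replays the shipped
   segments; a production setup would use a script that waits for each new
   segment to appear rather than plain <command>cp</>:
<programlisting>
restore_command = 'cp /mnt/archive/%f %p'
</programlisting>
  </para>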
 </sect1>

 <sect1 id="continuously-running-replication-server">
  <title>Continuously Running Replication Server</title>

  <para>
   A continuously running replication server allows the backup server to
   answer read-only queries while the master server is running.  It
   receives a continuous stream of write activity from the master server.
   Because the backup server can be used for read-only database requests,
   it is ideal for data warehouse queries.
  </para>

  <para>
   Slony-I is an example of this type of replication, with per-table
   granularity.  It updates the backup server in batches, so the replication
   is asynchronous and might lose data during a failover.
  </para>
 </sect1>

 <sect1 id="data-partitioning">
  <title>Data Partitioning</title>

  <para>
   Data partitioning splits tables into data sets.  Each set can only be
   modified by one server.  For example, data can be partitioned by
   offices, e.g., London and Paris.  While the London and Paris servers
   have all data records, only London can modify London records, and
   only Paris can modify Paris records.
  </para>

  <para>
   Such partitioning implements both failover and load balancing.  Failover
   is achieved because the data resides on both servers, and this is an
   ideal way to enable failover if the servers share a slow communication
   channel. Load balancing is possible because read requests can go to any
   of the servers, and write requests are split among the servers.  Of
   course, the communication to keep all the servers up-to-date adds
   overhead, so ideally the write load should be low, or localized as in
   the London/Paris example above.
  </para>

  <para>
   Data partitioning is usually handled by application code, though rules
   and triggers can be used to keep the read-only data sets current.  Slony-I
   can also be used in such a setup.  While Slony-I replicates only entire
   tables, London and Paris can be placed in separate tables, and
   inheritance can be used to access both tables using a single table name.
  </para>
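
  <para>
   A minimal sketch of the inheritance approach, using hypothetical
   table names:
<programlisting>
-- One empty parent and one child table per office:
CREATE TABLE orders (id int, office text, amount numeric);
CREATE TABLE orders_london (CHECK (office = 'London')) INHERITS (orders);
CREATE TABLE orders_paris  (CHECK (office = 'Paris'))  INHERITS (orders);

-- Each office's server writes only to its own child table ...
INSERT INTO orders_london VALUES (1, 'London', 500.00);

-- ... but a query against the parent reads both data sets:
SELECT * FROM orders;
</programlisting>
  </para>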
 </sect1>

 <sect1 id="query-broadcast-load-balancing">
  <title>Query Broadcast Load Balancing</title>

  <para>
   Query broadcast load balancing is accomplished by having a program
   intercept every query and send it to all servers.  Read-only queries can
   be sent to a single server because there is no need for all servers to
   process them.  This is unusual because most replication solutions have
   each write server propagate its changes to the other servers.  With
   query broadcasting, each server operates independently.
  </para>

  <para>
   Because each server operates independently, functions like
   <function>random()</>, <function>CURRENT_TIMESTAMP</>, and
   sequences can have different values on different servers.  If
   this is unacceptable, applications must query such values from
   a single server and then use those values in write queries.
   Also, care must be taken that all transactions either commit or
   abort on all servers.  Pgpool is an example of this type of
   replication.
  </para>
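
  <para>
   For example, an application might allocate a sequence value on a single
   designated server, then embed the result in the broadcast write (the
   table and sequence names below are hypothetical):
<programlisting>
-- On one server only: reserve the next key.
SELECT nextval('orders_id_seq');

-- Broadcast to all servers, with the reserved value (say, 1234)
-- supplied literally so every server stores identical data:
INSERT INTO orders (id, office) VALUES (1234, 'London');
</programlisting>
  </para>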
 </sect1>

 <sect1 id="clustering-for-load-balancing">
  <title>Clustering For Load Balancing</title>

  <para>
   In clustering, each server can accept write requests, and these
   write requests are broadcast from the original server to all
   other servers before each transaction commits.  Heavy write
   activity can cause excessive locking, leading to poor performance.
   In fact, write performance is often worse than that of a single
   server.  Read requests can be sent to any server.  Clustering
   is best for mostly read workloads, though its big advantage is
   that any server can accept write requests --- there is no need
   to partition workloads between read/write and read-only servers.
  </para>

  <para>
   Clustering is implemented by <productname>Oracle</> in their
   <productname><acronym>RAC</></> product.  <productname>PostgreSQL</>
   does not offer this type of load balancing, though
   <productname>PostgreSQL</> two-phase commit (<xref
   linkend="sql-prepare-transaction-title"> and <xref linkend=
   "sql-commit-prepared-title">) can be used to implement this in
   application code or middleware.
  </para>
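
  <para>
   A sketch of how such middleware might use two-phase commit follows; the
   transaction identifier and table are hypothetical, and each server must
   have <varname>max_prepared_transactions</> set above zero:
<programlisting>
-- Run the same transaction on every server, stopping short of commit:
BEGIN;
UPDATE accounts SET balance = balance - 100.00 WHERE id = 7;
PREPARE TRANSACTION 'txn_debit_7';

-- Once every server has prepared successfully, commit everywhere:
COMMIT PREPARED 'txn_debit_7';

-- Had any server failed to prepare, roll back everywhere instead:
-- ROLLBACK PREPARED 'txn_debit_7';
</programlisting>
  </para>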
 </sect1>

 <sect1 id="clustering-for-parallel-query-execution">
  <title>Clustering For Parallel Query Execution</title>

  <para>
   This allows multiple servers to work concurrently on a single
   query.  One possible way this could work is for the data to be
   split among servers, with each server executing its part of the
   query and sending its results to a central server, which combines
   them and returns them to the user.  There currently is no
   <productname>PostgreSQL</> open source solution for this.
  </para>
 </sect1>

 <sect1 id="commercial-solutions">
  <title>Commercial Solutions</title>

  <para>
   Because <productname>PostgreSQL</> is open source and easily
   extended, a number of companies have taken <productname>PostgreSQL</>
   and created commercial closed-source solutions with unique
   failover, replication, and load balancing capabilities.
  </para>
 </sect1>

</chapter>