Backup and Restore
As with anything that contains valuable data, PostgreSQL
databases should be backed up regularly. While the procedure is
essentially simple, it is important to have a basic understanding of
the underlying techniques and assumptions.
There are two fundamentally different approaches to backing up
PostgreSQL data: SQL dump and file system level backup.
SQL Dump
The idea behind the SQL-dump method is to generate a text file with SQL
commands that, when fed back to the server, will recreate the
database in the same state as it was at the time of the dump.
PostgreSQL provides the utility program
pg_dump for this purpose. The basic usage of this
command is:
pg_dump dbname > outfile
As you see, pg_dump writes its results to the
standard output. We will see below how this can be useful.
pg_dump is a regular PostgreSQL
client application (albeit a particularly clever one). This means
that you can do this backup procedure from any remote host that has
access to the database. But remember that pg_dump
does not operate with special permissions. In particular, you must
have read access to all tables that you want to back up, so in
practice you almost always have to be a database superuser.
To specify which database server pg_dump should
contact, use the command line options -h host and -p port.
Like any other PostgreSQL client application,
pg_dump will by default connect with the database
user name that is equal to the current operating system user name. To override
this, either specify the -U option or set the
environment variable PGUSER. Remember that
pg_dump connections are subject to the normal
client authentication mechanisms (which are described in the chapter
on client authentication).
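For example, a dump taken over the network might be invoked as
follows, where host, port, and username are placeholders for your
own connection settings:
pg_dump -h host -p port -U username dbname > outfile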
Dumps created by pg_dump are internally consistent,
that is, updates to the database while pg_dump is
running will not be in the dump. pg_dump does not
block other operations on the database while it is working.
(Exceptions are those operations that need to operate with an
exclusive lock, such as VACUUM FULL.)
When your database schema relies on OIDs (for instance as foreign
keys) you must instruct pg_dump to dump the OIDs
as well. To do this, use the -o command line
option. Large objects are not dumped by default, either.
See pg_dump's command reference page if you use
large objects.
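For instance, a sketch of a dump that also preserves OIDs, using the
same placeholder names as above:
pg_dump -o dbname > outfile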
Restoring the dump
The text files created by pg_dump are intended to
be read in by the psql program. The
general command form to restore a dump is
psql dbname < infile
where infile is what
you used as outfile
for the pg_dump command. The database dbname will not be created by this
command; you must create it yourself from template0 before executing
psql (e.g., with createdb -T template0
dbname).
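Putting these two steps together, a restore might look like this
(again using the placeholder names dbname and infile):
createdb -T template0 dbname
psql dbname < infile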
psql supports similar options to pg_dump
for controlling the database server location and the user name. See
its reference page for more information.
If the objects in the original database were owned by different
users, then the dump will instruct psql to connect
as each affected user in turn and then create the relevant
objects. This way the original ownership is preserved. This also
means, however, that all these users must already exist, and
furthermore that you must be allowed to connect as each of them.
It might therefore be necessary to temporarily relax the client
authentication settings.
Once restored, it is wise to run ANALYZE on each
database so the optimizer has useful statistics. You
can also run vacuumdb -a -z to ANALYZE all
databases.
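For example, to gather statistics for one restored database, or for
all databases in the cluster (dbname is a placeholder as before):
psql -d dbname -c "ANALYZE"
vacuumdb -a -z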
The ability of pg_dump and psql to
write to or read from pipes makes it possible to dump a database
directly from one server to another; for example:
pg_dump -h host1 dbname | psql -h host2 dbname
The dumps produced by pg_dump are relative to
template0. This means that any languages, procedures,
etc. added to template1 will also be dumped by
pg_dump. As a result, when restoring, if you are
using a customized template1, you must create the
empty database from template0, as in the example
above.
Restore performance can be improved by increasing the
configuration parameter maintenance_work_mem
(see the documentation on run-time configuration).
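For instance, you might temporarily raise this setting in
postgresql.conf before a large restore (the value shown is purely
illustrative; pick one that suits your available memory):
maintenance_work_mem = 131072   # illustrative value, in kilobytes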
Using pg_dumpall
The above mechanism is cumbersome and inappropriate when backing
up an entire database cluster. For this reason the
pg_dumpall program is provided.
pg_dumpall backs up each database in a given
cluster, and also preserves cluster-wide data such as
users and groups. The call sequence for
pg_dumpall is simply
pg_dumpall > outfile
The resulting dump can be restored with psql:
psql template1 < infile
(Actually, you can specify any existing database name to start from,
but if you are reloading in an empty cluster then template1
is the only available choice.) It is always necessary to have
database superuser access when restoring a pg_dumpall
dump, as that is required to restore the user and group information.
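For example, assuming the database superuser is named postgres (an
assumption; use whatever superuser name your installation has), a
full cluster restore might be run as:
psql -U postgres template1 < infile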
Large Databases
Since PostgreSQL allows tables larger
than the maximum file size on your system, it can be problematic
to dump such a table to a file, because the resulting file will likely
be larger than the maximum size allowed by your system. As
pg_dump can write to the standard output, you can
just use standard Unix tools to work around this possible problem.
Use compressed dumps.
You can use your favorite compression program, for example
gzip.
pg_dump dbname | gzip > filename.gz
Reload with
createdb dbname
gunzip -c filename.gz | psql dbname
or
cat filename.gz | gunzip | psql dbname
Use split.
The split command
allows you to split the output into pieces that are
acceptable in size to the underlying file system. For example, to
make chunks of 1 megabyte:
pg_dump dbname | split -b 1m - filename
Reload with
createdb dbname
cat filename* | psql dbname
Use the custom dump format.
If PostgreSQL was built on a system with the zlib compression library
installed, the custom dump format will compress data as it writes it
to the output file. For large databases, this will produce similar dump
sizes to using gzip, but has the added advantage that the tables can be
restored selectively. The following command dumps a database using the
custom dump format:
pg_dump -Fc dbname > filename
See the pg_dump and pg_restore reference pages for details.
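As a sketch of the selective-restore possibility, the following would
restore just one table from a custom-format dump into an existing
database (the table name mytable is hypothetical):
pg_restore -d dbname -t mytable filename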
Caveats
pg_dump (and by implication
pg_dumpall) has a few limitations which stem from
the difficulty of reconstructing certain information from the system
catalogs.
Specifically, the order in which pg_dump writes
the objects is not very sophisticated. This can lead to problems,
for example, when functions are used as column default values. The
only answer is to manually reorder the dump. If you created
circular dependencies in your schema then you will have more work
to do.
For reasons of backward compatibility, pg_dump
does not dump large objects by default. To dump
large objects you must use either the custom or the TAR output
format, and use the -b option in
pg_dump. See the reference pages for details. The
directory contrib/pg_dumplo of the
PostgreSQL source tree also contains a program
that can dump large objects.
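For example, a sketch of a dump that includes large objects, using
the custom format:
pg_dump -Fc -b dbname > filename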
Please familiarize yourself with the
pg_dump reference page.
File system level backup
An alternative backup strategy is to directly copy the files that
PostgreSQL uses to store the data in the database. Where
these files are located is described elsewhere in this documentation,
but you have probably found them already if you are
interested in this method. You can use whatever method you prefer
for doing usual file system backups, for example:
tar -cf backup.tar /usr/local/pgsql/data
There are two restrictions, however, which make this method
impractical, or at least inferior to the pg_dump
method:
The database server must be shut down in order to
get a usable backup. Half-way measures such as disallowing all
connections will not work, as there is always some buffering
going on. Information about stopping the server can be
found elsewhere in this documentation. Needless to say,
you also need to shut down the server before restoring the
data.
If you have dug into the details of the file system layout of the
database, you may be tempted to try to back up or restore only certain
individual tables or databases from their respective files or
directories. This will not work because the
information contained in these files is only half the
truth. The other half is in the commit log files
pg_clog/*, which contain the commit status of
all transactions. A table file is only usable with this
information. Of course it is also impossible to restore only a
table and the associated pg_clog data
because that would render all other tables in the database
cluster useless.
An alternative file-system backup approach is to make a
consistent snapshot of the data directory, if the
file system supports that functionality (and you are willing to
trust that it is implemented correctly). The typical procedure is
to make a frozen snapshot of the volume containing the
database, then copy the whole data directory (not just parts, see
above) from the snapshot to a backup device, then release the frozen
snapshot. This will work even while the database server is running.
However, a backup created in this way saves
the database files in a state where the database server was not
properly shut down; therefore, when you start the database server
on the backed-up data, it will think the server had crashed
and replay the WAL log. This is not a problem, just be aware of
it.
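As an illustration only, on a Linux host whose data directory lives
on an LVM logical volume, the procedure could look roughly like the
following; the volume group, volume, and mount point names are
assumptions, and the exact commands depend on your volume manager or
snapshot-capable file system:
lvcreate -L 1G -s -n pgsnap /dev/vg0/pgdata   # freeze a snapshot of the volume (names are examples)
mount /dev/vg0/pgsnap /mnt/pgsnap             # mount the snapshot
tar -cf /backup/pgdata.tar -C /mnt/pgsnap .   # copy the whole data directory
umount /mnt/pgsnap                            # then release the snapshot
lvremove -f /dev/vg0/pgsnap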
If your database is spread across multiple volumes (for example,
data files and WAL log on different disks) there may not be any way
to obtain exactly-simultaneous frozen snapshots of all the volumes.
Read your file system documentation very carefully before trusting
the consistent-snapshot technique in such situations.
Note that a file system backup will not necessarily be
smaller than an SQL dump. On the contrary, it will most likely be
larger. (pg_dump does not need to dump
the contents of indexes, for example, just the commands to recreate
them.)
Migration Between Releases
As a general rule, the internal data storage format is subject to
change between major releases of PostgreSQL (where
the number after the first dot changes). This does not apply to
different minor releases under the same major release (where the
number after the second dot changes); these always have compatible
storage formats. For example, releases 7.0.1, 7.1.2, and 7.2 are
not compatible, whereas 7.1.1 and 7.1.2 are. When you update
between compatible versions, you can simply reuse the data
area on disk with the new executables. Otherwise you need to
back up your data and restore it on the new
server, using pg_dump. (There are checks in place
that prevent you from doing the wrong thing, so no harm can be done
by confusing these things.) The precise installation procedure is
not the subject of this section; those details are in the installation
instructions.
The least downtime can be achieved by installing the new server in
a different directory and running both the old and the new servers
in parallel, on different ports. Then you can use something like
pg_dumpall -p 5432 | psql -d template1 -p 6543
to transfer your data. Or use an intermediate file if you want.
Then you can shut down the old server and start the new server at
the port the old one was running at. You should make sure that the
database is not updated after you run pg_dumpall,
otherwise you will obviously lose that data. See the documentation
on client authentication for information on how to prohibit
access. In practice you probably want to test your client
applications on the new setup before switching over.
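For instance, the switchover itself might then look like this,
assuming the old installation lives in /usr/local/pgsql and the new
one in /usr/local/pgsql.new (both directory names are only examples):
pg_ctl -D /usr/local/pgsql/data stop
/usr/local/pgsql.new/bin/postmaster -D /usr/local/pgsql.new/data -p 5432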
If you cannot or do not want to run two servers in parallel, you can
do the backup step before installing the new version, bring down
the server, move the old version out of the way, install the new
version, start the new server, and restore the data. For example:
pg_dumpall > backup
pg_ctl stop
mv /usr/local/pgsql /usr/local/pgsql.old
cd ~/postgresql-&version;
gmake install
initdb -D /usr/local/pgsql/data
postmaster -D /usr/local/pgsql/data
psql template1 < backup
See elsewhere in this documentation about ways to start and stop the
server and other details. The installation instructions will advise
you of strategic places to perform these steps.
When you move the old installation out of the way
it is no longer perfectly usable. Some parts of the installation
contain information about where the other parts are located. This
is usually not a big problem, but if you plan on using two
installations in parallel for a while, you should assign them
different installation directories at build time.