Hot Standby


A hot standby system consists of a master database and one or more standby databases that are updated at regular intervals. Master and standby databases together form a cluster that, outwardly, behaves like a single database. More information: Architecture of a Hot Standby System

The standby databases are in the STANDBY operational state, which corresponds to a state between ADMIN and ONLINE. In this state, they are not full-fledged databases (they do not, for example, write log entries).

If the master database fails, one of the standby databases automatically becomes the new master database. More information: What Happens When the Master Database of a Hot Standby System Fails?
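The automatic takeover can be pictured with a toy model (plain Python; the class and method names are illustrative, not MaxDB APIs). One plausible sequence, consistent with the log mechanism described later in this document: when the master fails, a standby replays the redo log entries it has not yet applied from the shared log area and then assumes the master role.

```python
# Toy model of a hot standby takeover (illustrative only, not MaxDB code).

class Database:
    def __init__(self, name, role):
        self.name = name
        self.role = role          # "MASTER" or "STANDBY"
        self.applied_log_pos = 0  # last redo log position applied locally

class Cluster:
    def __init__(self, master, standbys, shared_log_end):
        self.master = master
        self.standbys = standbys
        self.shared_log_end = shared_log_end  # last position written to shared log

    def fail_over(self):
        """Promote the most up-to-date standby after the master fails."""
        new_master = max(self.standbys, key=lambda db: db.applied_log_pos)
        # The new master first redoes any remaining log entries it has not applied.
        new_master.applied_log_pos = self.shared_log_end
        new_master.role = "MASTER"
        self.standbys.remove(new_master)
        self.master = new_master
        return new_master

cluster = Cluster(
    master=Database("db1", "MASTER"),
    standbys=[Database("db2", "STANDBY"), Database("db3", "STANDBY")],
    shared_log_end=500,
)
cluster.standbys[0].applied_log_pos = 480  # db2 lags slightly behind the master
cluster.standbys[1].applied_log_pos = 450  # db3 lags further behind

promoted = cluster.fail_over()
print(promoted.name, promoted.role, promoted.applied_log_pos)  # db2 MASTER 500
```

Because the standby only has to redo the outstanding log entries, the takeover is much faster than a full restart of the master database.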

The main advantage of a hot standby system is the short downtime when errors occur (much shorter than the time it would take to restart the master database, for example).

Note that a hot standby system does not protect you against database errors and inconsistencies caused by users or applications. To prevent these types of errors, you have to perform regular data and log backups. More information: Backing Up Databases


System Requirements for a Hot Standby System


Hardware

  • The computers on which the master and standby databases are located use hardware with the same amount of memory and the same type and number of processors.

  • Every computer has a unique name.

  • All computers are switched on.

Operating System

  • You have administrator rights on all computers.

Database Software

  • The same database software version is installed on all computers.

  • All paths and file names are the same for master and standby databases (run directory, volumes, trace files, and so on).

  • The respective SAP MaxDB X servers are running on all computers.

Cluster Software

The computers on which the master and standby databases of the hot standby system are located are part of a cluster. For information on setting up a cluster, see the manufacturer’s documentation.

Cluster software requirements:

  • Fail-over mechanism that allows the master and standby databases to switch roles in case of errors

  • IP switching

    Master and standby databases each have their own IP address.

The setup of a hot standby system for SAP MaxDB has been tested with the following cluster software: IBM High Availability Cluster Multiprocessing (HACMP) for AIX.

Storage System

The data and log areas of the master and standby databases are located in a storage system. For information about setting up the storage system, see the manufacturer’s documentation.

Storage system requirements:

  • All databases can access the log area simultaneously.

    Ideally, the storage system should offer two different access modes for the log area: read-only and read-write.

  • Every data volume of each database involved has its own physical storage area in the storage system. To avoid I/O access collisions, we recommend separate hard disks in the storage system.

    The data volumes of each database have the same access path on their respective computers; alternatively, a corresponding symbolic link can be set up.

    Consistent copies of the data volumes (split) can be created that can be read and written to independently of one another after the split. While a copy is being created, the master database can continue to write to its data volume, so that downtimes remain minimal. After the split, the data volumes of the master and standby databases are completely independent of each other.

  • Fast copying of data within the storage system

    When standby databases are initialized or if an error occurs, large amounts of data may have to be copied.

  • Fast transmission of data between the storage system and the computers on which the databases are located

The following storage systems fulfill these requirements and support the setting up of a hot standby system for SAP MaxDB: EMC Symmetrix, IBM TotalStorage Enterprise Storage Server (ESS), IBM TotalStorage SAN Volume Controller (SVC).
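The volume split described above behaves like a point-in-time snapshot: the copy is frozen at the moment of the split, while the master keeps writing to its own volume. A minimal model (hypothetical names; a real storage system typically uses copy-on-write rather than a full copy):

```python
# Toy model of a point-in-time split of a data volume (illustrative only).

def split(volume):
    """Create an independent copy of the volume at this instant."""
    return dict(volume)  # simplification: a full copy instead of copy-on-write

master_volume = {"page1": "a", "page2": "b"}
standby_volume = split(master_volume)   # consistent copy at split time

master_volume["page1"] = "a'"           # master continues writing...
master_volume["page3"] = "c"            # ...without affecting the copy

print(standby_volume)  # {'page1': 'a', 'page2': 'b'}
```

After the split, writes to the master volume no longer affect the standby copy, which is why both databases can then read and write independently.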


Procedure

  1. Set up the cluster consisting of the computers on which the master and standby databases are to be located.

    For information on configuring the cluster, see the documentation for your cluster software.

  2. Using the cluster software, define a Virtual Server Name with which the system is to be addressed externally.

    The Virtual Server Name must not be the same as the name of a computer in the cluster.

  3. Set up your storage system.

    For information on configuring the storage system, see the manufacturer’s documentation.

  4. Create a database on the computer that you have defined as the Virtual Server.

    The volumes of the database must be located in the storage system.

  5. For the following steps, use Database Manager CLI.

    Configure the database as the master database of the hot standby system and add one or more standby databases.

    When you add a new standby database, the storage system copies the complete content of the master database in a consistent state, for example, the state at the last savepoint, to the data area of the standby database.

    Note that master and standby databases use the same values for their database parameters. When you change a database parameter in the master database, the system automatically transfers the changes to the standby databases.
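The parameter handling in step 5 can be sketched as follows (a toy model with made-up class names; the real propagation is internal to the database software): a parameter change on the master is pushed to every standby, so all members of the cluster run with identical parameter values.

```python
# Toy model of master-to-standby parameter propagation (illustrative only).

class HotStandbyMember:
    def __init__(self, name):
        self.name = name
        self.params = {}

class HotStandbySystem:
    def __init__(self, master_name, standby_names):
        self.master = HotStandbyMember(master_name)
        self.standbys = [HotStandbyMember(n) for n in standby_names]

    def set_parameter(self, key, value):
        """Change a parameter on the master and propagate it to all standbys."""
        self.master.params[key] = value
        for standby in self.standbys:
            standby.params[key] = value

hss = HotStandbySystem("db_master", ["db_standby1", "db_standby2"])
hss.set_parameter("CacheMemorySize", 131072)  # parameter name for illustration

print(all(s.params == hss.master.params for s in hss.standbys))  # True
```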


At regular intervals, the master database informs the standby databases of the position in the log area up to which it has written new redo log entries to the shared log area. To configure this interval, use the HotStandbySyncInterval special database parameter.
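This announcement cycle can be modeled as follows (a sketch only; the interval is measured here in abstract "ticks", and none of the names are MaxDB APIs). The master keeps writing redo log entries continuously, but only publishes its current write position once per interval:

```python
# Toy model of the master announcing its log write position at a fixed
# interval (illustrative; stands in for HotStandbySyncInterval).

SYNC_INTERVAL = 3   # announce every 3 ticks

def run(ticks, writes_per_tick=10):
    log_write_pos = 0
    announced_pos = 0
    announcements = []
    for tick in range(1, ticks + 1):
        log_write_pos += writes_per_tick   # master keeps writing redo log
        if tick % SYNC_INTERVAL == 0:      # time to inform the standbys
            announced_pos = log_write_pos
            announcements.append((tick, announced_pos))
    return log_write_pos, announced_pos, announcements

pos, announced, msgs = run(ticks=10)
print(pos, announced, msgs)  # 100 90 [(3, 30), (6, 60), (9, 90)]
```

Between announcements, the master's write position can run ahead of the position known to the standbys, which is why the standbys always lag slightly behind.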

The system regularly updates the standby databases by importing the redo log entries of the master database up to a certain point in time from the shared log area and repeating the corresponding data changes. Thus the data area of the standby database, with some delay, always corresponds to the data area of the master database. To configure this delay, use the HotStandbyDelayTime special database parameter.
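The standby side of this mechanism can be sketched as follows (a toy model; the names are illustrative, not MaxDB APIs). Each time the master announces a new log position, the standby replays all redo entries between its last applied position and the announced one:

```python
# Toy model of a standby replaying redo log entries up to the position the
# master last announced (illustrative only, not MaxDB code).

shared_log = [f"redo entry {i}" for i in range(100)]  # the shared log area

class Standby:
    def __init__(self):
        self.applied = 0       # log position up to which redo has been replayed
        self.data_version = 0  # stand-in for the state of the data area

    def catch_up(self, announced_pos):
        """Replay all redo entries between self.applied and announced_pos."""
        for _ in shared_log[self.applied:announced_pos]:
            self.data_version += 1   # repeat the corresponding data change
        self.applied = announced_pos

standby = Standby()
standby.catch_up(60)   # master announced log position 60
standby.catch_up(90)   # later announcement: position 90
print(standby.applied, standby.data_version)  # 90 90
```

Because the standby only ever replays entries up to the last announced position, its data area trails the master's by a bounded delay rather than diverging from it.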

More information: Special Database Parameters