A hot standby system consists of a master instance and one or more standby instances, which are located on different computers. The master and standby instances each have their own kernel, cache, MaxDB X Server, DBM Server, and so on.
The computers on which the database instances are located constitute a cluster. Communication within the cluster takes place via the internal IP addresses of the computers. Outwardly, the hot standby system behaves like a single database instance. To access the hot standby system externally, you need the following information (a connection sketch follows this list):
· Name of the database instance
· Virtual server name (identifies the computer that currently has the role of master instance)
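The following is a minimal sketch of an external connection, assuming the MaxDB JDBC driver (class com.sap.dbtech.jdbc.DriverSapDB) with its jdbc:sapdb://<host>/<database> URL form. The server and instance names are taken from the example later in this section; the user name and password are placeholders. The point is that a client needs only these two names, never the name of the physical computer that currently holds the master role.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class HotStandbyConnect {
    public static void main(String[] args) throws ClassNotFoundException, SQLException {
        // Register the MaxDB JDBC driver (with JDBC 4.0+ drivers this explicit step is usually unnecessary).
        Class.forName("com.sap.dbtech.jdbc.DriverSapDB");

        // The client addresses the cluster only by virtual server name and database instance name.
        String virtualServer = "VIRTUAL_SERVER"; // virtual server name from the example below
        String databaseName  = "DEMODB";         // database instance name from the example below
        String url = "jdbc:sapdb://" + virtualServer + "/" + databaseName;

        try (Connection con = DriverManager.getConnection(url, "MONA", "RED")) { // placeholder credentials
            System.out.println("Connected to " + databaseName + " via " + virtualServer);
        }
    }
}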
If the master instance fails, the virtual server name transfers to the computer with the standby instance, which assumes the master role (see Behavior of the Hot Standby System when Errors Occur).
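Because the virtual server name follows the master role, a client can recover from a failover simply by reconnecting to the same address. The sketch below illustrates this under the same assumptions as the previous example (hypothetical JDBC URL and placeholder credentials); the retry count and wait time are illustrative, not prescribed values.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class FailoverReconnect {
    // Same hypothetical URL as in the previous sketch: the client always
    // addresses the virtual server name, never a physical computer.
    private static final String URL = "jdbc:sapdb://VIRTUAL_SERVER/DEMODB";

    /** Retries the connection after a failure until a new master is reachable. */
    static Connection connectWithRetry(int maxAttempts, long waitMillis)
            throws SQLException, InterruptedException {
        SQLException lastError = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                // After a failover, the virtual server name points to the computer
                // whose standby instance has taken over the master role, so the
                // connection target stays the same.
                return DriverManager.getConnection(URL, "MONA", "RED"); // placeholder credentials
            } catch (SQLException e) {
                lastError = e;
                Thread.sleep(waitMillis); // give the takeover time to complete
            }
        }
        throw lastError;
    }
}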
The data and log areas of all instances are located in a storage system.
As a precaution against hardware failure, we recommend mirroring the data and log areas in the storage system.
Master and standby instances use the same log area; however, standby instances only have read access (see Synchronization of Master and Standby Instances).
By contrast, in a standby database, all instances have separate log areas.
Master and standby instances each have separate data areas that are independent of each other.
The following graphic shows a hot standby system consisting of a master instance on the computer GENUA and a standby instance on the computer PARMA. The computers GENUA and PARMA are part of a cluster. The name of the database instance is DEMODB and the virtual server name is VIRTUAL_SERVER. The data and log areas of the instances are located in a storage system, while the cluster software runs on a separate computer.
Example of the Structure of a Hot Standby System
Outwardly, the hot standby system looks like a single database instance DEMODB on the computer VIRTUAL_SERVER: