Host-based replication and file system snapshots

Disk-buffered replication mode combines local and remote replication technologies. A consistent point-in-time (PIT) local replica of the source device is first created and then replicated to a remote replica on the target array. At the beginning of a cycle, the network links between the two arrays are suspended and no data is transmitted. While the production application runs on the source device, a consistent PIT local replica of the source device is created. The network links are then enabled, and the data on the local replica in the source array is transmitted to its remote replica in the target array.

Data migration solutions

Data mobility refers to moving data between heterogeneous storage arrays for cost, performance, or any other reason. It helps implement a tiered storage strategy.

Data migration refers to moving data from one storage array to other heterogeneous storage arrays for technology refresh, consolidation, or any other reason. The array performing the replication operations is called the control array. Data migration solutions perform push and pull operations for data movement. These terms are defined from the perspective of the control array. In the push operation, data is moved from the control array to the remote array.

The control device, therefore, acts like the source, while the remote device is the target. In the pull operation, data is moved from the remote array to the control array. The remote device is the source, and the control device is the target.

The push and pull operations can be either hot or cold. These terms apply to the control devices only. In a cold operation, the control device is inaccessible to the host during replication. Cold operations guarantee data consistency because both the control and the remote devices are offline. In a hot operation, the control device remains online for host operations. During hot push and pull operations, changes can be made to the control device because the control array keeps track of all changes and thus ensures data integrity.

In hypervisor-to-hypervisor VM migration, the entire active state of a VM is moved from one hypervisor to another. Because the virtual disks of the VM are not migrated, this technique requires that both the source and target hypervisors have access to the same storage. In array-to-array VM migration, virtual disks are moved from the source array to the remote array. This approach enables the administrator to move VMs across dissimilar storage arrays. Array-to-array migration starts by copying the metadata about the VM from the source array to the target.

The metadata essentially consists of configuration, swap, and log files. After the metadata is copied, the VM disk file is replicated to the new location.


Chapter Objective

After completing this chapter you will be able to:
o Discuss local replication and the possible uses of local replicas
o Explain consistency considerations when replicating file systems and databases
o Discuss host-based and array-based replication technologies: their functionality, differences, and considerations, and how to select the appropriate technology

Configuring ssh for Oracle ACFS snapshot-based replication

When you perform the instructions the first time, complete the steps as written for the primary cluster and the standby cluster.

The second time, reverse the primary and standby roles: perform the steps marked as necessary on the primary cluster on your standby cluster, and perform the steps marked as necessary on the standby cluster on your primary cluster. The procedures that must be performed twice are described in: Getting a public key for repluser from the primary cluster; Getting host keys for the standby cluster; Notes on permissions for ssh-related files; Notes on sshd configuration; and Validating your ssh-related key configuration.

After you have completed all the necessary procedures, you can use the instructions described in Validating your ssh-related key configuration to confirm that you have configured ssh correctly in both directions.

Oracle ACFS snapshot-based replication uses ssh as the transport between the primary and standby clusters, so the user identity under which replication is performed on the standby must be carefully managed. In the replication process, the replication user repluser on the primary node where replication is running uses ssh to log in to the standby node involved in replication.

The user chosen as repluser should have Oracle ASM administrator privileges. The user specified to the Oracle installer when the Oracle software was first installed usually belongs to the needed groups, so it is convenient to choose that user as the replication user.

In this discussion, the replication user is identified as repluser; however, you would replace repluser with the actual user name that you have selected. The same user and group identities must be specified for repluser on both your primary cluster and your standby cluster.

Additionally, the mappings between user names and numeric uids, and between group names and numeric gids, must be identical on both the primary cluster and the standby cluster. This is required to ensure that the numeric values are used in the same manner on both clusters, because replication transfers only the numeric values from the primary to the standby. The process of distributing keys for Oracle ACFS replication includes getting a public key from the primary cluster, getting host keys for the standby cluster, ensuring permissions are configured properly for ssh-related files, configuring sshd as necessary, and lastly validating the ssh configuration.
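A quick, informal check of matching identities is to compare what the id command reports for repluser on a node of each cluster; repluser here is only a placeholder for whatever replication user you actually selected.

    # Run on a primary node and on a standby node.
    $ id repluser
    # The numeric uid, numeric gid, and group memberships reported on the two
    # clusters must be identical; correct /etc/passwd and /etc/group (or your
    # directory service) before configuring replication if they differ.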

When creating host keys, ensure that you create keys for both fully-qualified domain hostnames and the local hostnames. A public key for repluser defined on each node of your primary cluster must be known to repluser on each node of your standby cluster.

If the .ssh directory for repluser does not exist on a primary node, then create it with access only for repluser. Ensure that an ls -ld command for the .ssh directory shows that it is owned by repluser and accessible by no other user. If a public key file for repluser exists on a given primary node, then add its contents to the set of keys authorized to log in as repluser on each node of the standby where replication is run. If a public key file does not exist, generate a public and private key pair on the primary by running the following command as repluser.
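A conventional way to generate such a key pair is ssh-keygen; the rsa key type shown here is an assumption, so substitute whatever key type your security policy requires.

    # Run as repluser on the primary node; accept the default file name when prompted.
    $ ssh-keygen -t rsa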

You can press the Enter key in response to each prompt issued by the command. Copy the contents of the resulting public key file into the set of keys authorized to log in as repluser on each standby node where replication is run. If each primary node has its own public key for repluser, then all the public keys must be added to that file. A host key for each standby node where replication may run must be known on each primary node where replication may run. One way to generate the correct key is to run ssh manually as repluser from each primary node to each standby node.
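A minimal sketch of both steps follows, assuming the default key file name id_rsa.pub, the conventional authorized_keys file, and placeholder hostnames such as standby1.

    # Run as repluser on each primary node, once per standby node.
    # ssh-copy-id appends ~/.ssh/id_rsa.pub to repluser's authorized_keys file on the standby.
    $ ssh-copy-id repluser@standby1

    # Equivalent manual form:
    $ cat ~/.ssh/id_rsa.pub | ssh repluser@standby1 'cat >> ~/.ssh/authorized_keys'

    # A manual login from the primary also records the standby's host key;
    # answer yes when ssh asks whether to add the unknown host key.
    $ ssh repluser@standby1 true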

If the correct host key is not known already, then a warning displays and ssh asks whether it should add the key. If you respond with yes, then the ssh setup is complete. After the host key setup for standby nodes is complete on a given primary node, you need to perform an additional step if you use a Virtual IP address (VIP) to communicate with your standby cluster. Ultimately, the host key configuration performed on this first node of your primary cluster must be performed on every node in your primary cluster; the result of the above sequence, or an equivalent, must exist on each primary node.
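One equivalent, non-interactive way to make the standby host keys known on every primary node is to gather them with ssh-keyscan; the hostnames below are placeholders, and you should verify the collected keys against the standby cluster before trusting them.

    # Run as repluser on each primary node; standby1 and standby2 are placeholders.
    $ ssh-keyscan standby1 standby2 >> ~/.ssh/known_hosts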

By default, replication enables strict host key checking by ssh, to ensure that the primary node connects to the intended standby node or cluster when it runs ssh.

However, if you are certain that this checking is unneeded, as may be the case when the primary and standby clusters communicate over a private network, strict host key checking by ssh can be disabled. If strict host key checking is disabled, then no host key setup is required. For information about the acfsutil repl init command, refer to the documentation for acfsutil repl init. For ssh to work with the keys you have established, you must ensure that permissions are set properly on each node for the .ssh directory of repluser and the key files it contains; a minimal sketch follows.
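The following is only a sketch of the conventional ssh permission settings, assuming the default id_rsa private key name; consult the ssh and sshd documentation for the authoritative requirements.

    # Run as repluser on every primary and standby node.
    $ chmod 700 ~/.ssh                    # directory accessible only by repluser
    $ chmod 600 ~/.ssh/id_rsa             # private key readable only by repluser
    $ chmod 600 ~/.ssh/authorized_keys    # authorized keys not writable by others
    $ chmod 644 ~/.ssh/known_hosts        # host keys may be world-readable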

For details on the permissions that should be given to each ssh-related file, consult the documentation for ssh and sshd. After you begin using replication, ssh is started frequently to perform replication operations. On some platforms, the ssh daemon sshd may be configured to log a message through syslog or a similar facility each time an ssh connection is established. The parameter that controls logging is called LogLevel. Connection messages are issued at level INFO.

For example, you can suppress these connection messages by lowering the logging level in the sshd configuration file (typically /etc/ssh/sshd_config), for instance by adding the following line:

    LogLevel ERROR

After you have established the host and user keys for ssh on both your primary and your standby clusters, you can use the command acfsutil repl info -c -u to validate the keys. Run this command as repluser on each node of each cluster.

It takes as arguments all the hostnames or addresses on the remote cluster that the local cluster may use in the future to perform replication. If you are not using a VIP to connect to your remote cluster, then for a given replication relationship, only one remote hostname or address is provided to acfsutil repl init primary.

However, if future relationships involve other remote host addresses, specify the complete set of remote addresses when running the acfsutil repl info -c -u command. If you are using a VIP to connect to your remote cluster, then you should specify the names or host-specific addresses of all remote hosts on which the VIP may be active.

When replication uses ssh to connect to a VIP, the host key returned is the key associated with the host where the VIP is currently active. Only the hostnames or addresses of individual remote nodes are used by ssh in this situation. In the command, each argument standby1 through standbyn specifies a standby cluster hostname or address; a sketch of the invocation appears below.
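The exact argument list of acfsutil repl info -c -u varies by release, so treat the following invocation as an assumption rather than definitive syntax; the hostnames and mount point are placeholders, and the command reference should be checked for whether the replicated mount point is required as a final argument.

    # Run as repluser on each node of the primary cluster.
    # standby1 and standby2 stand in for the standby hostnames or per-host addresses;
    # /acfsmounts/repl_data stands in for the replicated file system mount point.
    $ acfsutil repl info -c -u repluser standby1 standby2 /acfsmounts/repl_data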

The validation command confirms that user repluser can use ssh to connect to each standby hostname or address given, in the same manner as replication initialization. Do not specify the name of the VIP. After you have confirmed that each node of your primary cluster can connect to all nodes of your standby cluster, run the validation command again.

This time, run the command on each node of your standby cluster, specifying a hostname or IP address for every node of your primary cluster in the same format; in this direction, each argument primary1 through primaryn names a primary cluster hostname or address.

The rest of this section describes an example Oracle Solaris Cluster configuration that uses Availability Suite software. The names of the groups and resources created for the example configuration follow the conventions given below.

With the exception of devgrp-stor-rg, the names of the groups and resources are example names that can be changed as required. The replication resource group must have a name with the format devicegroupname-stor-rg.

Understanding Availability Suite Software in a Cluster

This section introduces disaster tolerance and describes the data replication methods that Availability Suite software uses.

Data Replication Methods Used by Availability Suite Software

This section describes the remote mirror replication method and the point-in-time snapshot method used by Availability Suite software.

Remote Mirror Replication

Figure A-1 (Remote Mirror Replication) shows remote mirror replication, which can be performed synchronously in real time or asynchronously.

Point-in-Time Snapshot

Figure A-2 (Point-in-Time Snapshot) shows a point-in-time snapshot.

Replication in the Example Configuration

Figure A-3 illustrates how remote mirror replication and point-in-time snapshot are used in this example configuration. A replication resource group must have the following characteristics:

Be a failover resource group. A failover resource can run on only one node at a time.

Have a logical hostname resource. A logical hostname is hosted on one node of each cluster (primary and secondary) and is used to provide source and target addresses for the Availability Suite software data replication stream.

Have an HAStoragePlus resource. The HAStoragePlus resource enforces the failover of the device group when the replication resource group is switched over or failed over.

Be named after the device group with which it is colocated, followed by -stor-rg; for example, devgrp-stor-rg.

Be online on both the primary cluster and the secondary cluster.

Configuring Application Resource Groups

To be highly available, an application must be managed as a resource in an application resource group. This section provides guidelines for configuring resource groups for a failover application and for a scalable application.

Configuring Resource Groups for a Failover Application

In a failover application, an application runs on one node at a time.

A resource group for a failover application must have the following characteristics:

Have an HAStoragePlus resource to enforce the failover of the file system or zpool when the application resource group is switched over or failed over.

Be online on the primary cluster and offline on the secondary cluster.
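As a rough illustration of those characteristics, the following sketch creates a failover application resource group with an HAStoragePlus resource using the standard Oracle Solaris Cluster commands; the group name nfs-rg, the resource name hasp-rs, and the mount point are hypothetical, and the properties you actually set depend on your application and file system layout.

    # Register the HAStoragePlus resource type if it is not already registered.
    clresourcetype register SUNW.HAStoragePlus

    # Create the failover application resource group (runs on one node at a time).
    clresourcegroup create nfs-rg

    # Add an HAStoragePlus resource so the file system fails over with the group.
    clresource create -g nfs-rg -t SUNW.HAStoragePlus \
        -p FileSystemMountPoints=/global/mountpoint -p AffinityOn=True hasp-rs

    # Bring the group online (managed) on the primary cluster only; leave it
    # offline on the secondary cluster.
    clresourcegroup online -M nfs-rg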

Figure A-4 shows the configuration of resource groups in a failover application.

Configuring Resource Groups for a Scalable Application

In a scalable application, an application runs on several nodes to create a single, logical service. A resource group for a scalable application must have the following characteristics:

Have a dependency on the shared address resource group. The nodes that are running the scalable application use the shared address to distribute incoming data.

Be online on the primary cluster and offline on the secondary cluster.

Figure A-5 illustrates the configuration of resource groups in a scalable application.

Guidelines for Managing a Takeover

If the primary cluster fails, the application must be switched over to the secondary cluster as soon as possible.

To switch back to the original primary cluster, perform the following tasks: synchronize the primary cluster with the secondary cluster to ensure that the primary volume is up to date; start the resource group on the primary cluster; and update DNS so that clients can access the application on the primary cluster.

Configure device groups, file systems for the NFS application, and resource groups on the primary cluster and on the secondary cluster.

The example configuration requires the Oracle Solaris OS (the same version on all nodes of a cluster), Oracle Solaris Cluster 4 software, and Solaris Volume Manager software (the same version of the volume manager software on all nodes). Different clusters can use different versions of the Oracle Solaris OS and Oracle Solaris Cluster software, but you must use the same version of Availability Suite software between clusters. For information about the latest software updates, log in to My Oracle Support.


