SRDF (SUN)

Advanced SRDF Operations: 

Disaster Recovery

Summary:

In the previous lab exercise, a file system was created using SRDF R1 volumes on a SUN server, and data files were added to it. That exercise also demonstrated how to suspend and resume the SRDF links and how to check for invalid tracks using the SYMCLI monitoring tools. This exercise builds on that understanding and demonstrates how SRDF can be used as part of a disaster recovery plan.

Objectives:

a)     Make the target (R2) volumes active, using SRDF failover procedures.

b)     Make the source (R1) volumes active, using SRDF failback procedures.


This exercise builds on the previous one, so we will first confirm that the hosts are properly configured for SRDF and that the SRDF volumes are in the proper state.  Next, this exercise will set up a scenario in which the local host or source Symmetrix ICDA has failed (a site failure), and the SRDF target volumes, with the data they contain, are made available to the remote host attached to the target Symmetrix.

 

1)  Verify that SRDF is properly configured on both the local and remote hosts.

a)     Use the following SYMCLI commands to list and verify the device groups you created in the previous exercise.
# symdg list  (on both the local and remote hosts)
# symdg show mysrcdg  (on the local host)
# symdg show mytgtdg  (on the remote host)

 

b)     Check the environment variables for SYMCLI and, if necessary, set the SYMCLI_DG variable to your device group.
# symcli -def
# SYMCLI_DG=mysrcdg  (on the local host)
# export SYMCLI_DG

# SYMCLI_DG=mytgtdg  (on the remote host)
# export SYMCLI_DG

c)      Verify the status of the devices in your device group. The local host should have read/write (RW) access to the source (R1) volumes, the remote host should see the target (R2) volumes as write disabled (WD), the SRDF link should be enabled (RW), and there should be no invalid tracks.
# symrdf query  (on the local host)

2)   Verify that the filesystem is mounted and available for use.  Perform this step on the local host.

a)  Is the filesystem available?
# mount

b)     If not mounted, execute the following command to mount it.
# mount /dev/dsk/c#t#d#s0 /mymp      

c)  Add 5 more data files to your filesystem using the following script.
# ksh
# i=6; while [ $i -le 10 ]
    do
     symdev -v list > /mymp/src_data$i
     let i=$i+1
    done
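The loop above can be tried on any host; the sketch below uses a scratch directory and `echo` as a stand-in for `symdev -v list`, since SYMCLI is only available on a host attached to the Symmetrix.

```shell
# Runnable sketch of the data-file loop. `echo` stands in for
# `symdev -v list` so the sketch runs without SYMCLI installed;
# /tmp/mymp stands in for the lab's /mymp mount point.
mkdir -p /tmp/mymp
i=6
while [ $i -le 10 ]
do
    echo "placeholder device listing" > /tmp/mymp/src_data$i
    i=$((i+1))
done
ls /tmp/mymp
```

On the lab host, substitute `symdev -v list` and `/mymp` to create the real data files.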

3)   Pass control of the SRDF volumes and associated data to the remote host.  Before we test disaster failover, we will perform a normal failover to verify the procedure.  This step not only passes control of the SRDF volumes to the remote host, it also requires that the remote host understand how the volumes are configured. If the local host has created LVM entities on the SRDF source volumes, the Volume Group information must be imported on the remote host after a failover.

a)     While in a disaster situation it is not possible to "gracefully" shut down applications and unmount filesystems, it is always less risky to do so whenever possible.  Before passing control of the SRDF volumes to the remote host, we will first unmount the filesystem on the local host.

i)        Unmount the filesystem using the following command:
# umount /mymp  (on the local host)

b)     Initiate failover of the SRDF volumes by executing the following command from the remote host.  The failover command can also be executed from the local host; however, in a true disaster situation we may not have access to the local host.
# symrdf failover (on the remote host)

Note the verbose output from the failover command.  Each step is displayed as it is executed.  The output could be piped to a file and saved for future study or reference.  More detailed information is logged in the /var/symapi/log directory.
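The note above suggests saving the failover output; a minimal sketch of one way to do that is shown below. `echo` stands in for the real `symrdf failover` command so the sketch runs on any host; on the Symmetrix host, substitute the real command.

```shell
# Sketch: capture the verbose failover output in a timestamped log file
# for future study. The echo line is a stand-in for `symrdf failover`,
# which is only available where SYMCLI is installed.
SYMRDF="echo symrdf failover output"   # replace with: symrdf failover
LOG=/tmp/failover_$(date +%Y%m%d_%H%M%S).log
$SYMRDF 2>&1 | tee "$LOG"
echo "saved to $LOG"
```

`tee` both displays the output and writes it to the log, so each step is still visible as it executes.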

c)      View the status of both the R1 and R2 volumes.
# symrdf query (on the remote host)

What level of access does the local host have to the source (R1) volumes?

 
What level of access does the remote host have to the target (R2) volumes? 
The source volumes should be Write Disabled (WD) and the target volumes Read/Write (RW).  Even though the local host still has read access to the source volumes, use caution when accessing them, as data integrity cannot be guaranteed.

d)     Before the remote SUN host can use the data on the SRDF target volumes, a mount point directory must be created so the filesystem can be mounted.
Execute the commands below, on the remote host, to mount the mirrored filesystem.

i)        Create a mount point directory on the remote host for the mirrored file system.
# mkdir /mymp 

ii)      Mount the filesystem (check to make certain the correct volume is being specified).

# mount /dev/dsk/c#t#d#s0 /mymp

iii)    Verify that the 10 files we created previously, from the local host on the source volumes, are available.
# ls -l /mymp

4)   Resume activities on the target host. The remote host now has full access to the SRDF volumes and associated data. To simulate production after a failover, we will make some changes to the filesystem.

a)   Execute the following commands on the remote host to add more data to the /mymp filesystem. Ensure a Korn shell is running.
# i=11; while [ $i -le 20 ]
        do
          symdev -v list > /mymp/src_data$i
          let i=$i+1
        done

b)     While changes are being made to the SRDF (R2) volumes from the remote host, the link between the source and target volumes is disabled.  Check to see how many invalid tracks have accumulated, using the appropriate command.
# symrdf query
How many invalid tracks are there? ___________________________

How many MB does this represent? ___________________________

Are the invalid tracks on the R1 or R2 volumes? ____________________
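To answer the MB question, multiply the invalid-track count by the track size. The 32 KB track size used below is an assumption typical of older Symmetrix models; verify the actual track size for your array before relying on the result.

```shell
# Convert an invalid-track count to megabytes. The 32 KB track size is
# an assumption (common on older Symmetrix arrays); check your model.
tracks=1024                       # example count from `symrdf query`
kb_per_track=32
mb=$(( tracks * kb_per_track / 1024 ))
echo "$tracks invalid tracks = $mb MB"
```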

 

5)   Pass control of the SRDF volumes and data back to the local host.

a)     The remote host should not have the filesystem mounted while control is being passed back to the local host, because the remote host's access to the target volumes will change to Write Disabled (read only).   Attempting to write to the filesystem while the volumes are Write Disabled will cause unpredictable results.
Unmount the filesystem and, if LVM is in use, deactivate the Volume Group.
# umount  /mymp  (on the remote host)
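Because failing back while the filesystem is still mounted is risky, a defensive check like the sketch below can be run first. Parsing `mount` output this way is an assumption that works on Solaris and most Unixes; adjust the pattern if your `mount` output format differs.

```shell
# Defensive sketch: refuse to proceed to `symrdf failback` while the
# filesystem is still mounted on this host. The grep of `mount` output
# is a portability assumption; verify against your platform's format.
MP=/mymp
if mount | grep -qw "$MP"; then
    echo "ERROR: $MP still mounted; unmount it before failback" >&2
    exit 1
fi
echo "$MP not mounted: safe to run symrdf failback"
```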

b)     Make the source volumes active by executing the following command.
# symrdf failback  (on the remote host)

c)      View the status of both the source (R1) and the target  (R2) volumes.
# symrdf query  (on the remote host)
What level of access does the local host have to the source volumes? ______________________________

What level of access does the remote host have to the target volumes? _____________________________

Is the link enabled? ___________

d)     On the local host, mount the filesystem and verify that the test files created while the remote host had control of the SRDF volumes are present.

# mount /dev/dsk/c#t#d#s0 /mymp

# ls -al /mymp
Are the additional src_data files that were created from the remote host on the target volumes available? _______________________________
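A quick way to answer is to count the src_data files; the sketch below compares the count against the expected total of 20 (files 1-10 created before the failover, 11-20 added by the remote host). MP defaults to the lab's /mymp mount; point it at a scratch directory to try the sketch elsewhere.

```shell
# Sketch: count the src_data files after failback. Expect 20 if all the
# files replicated; MP is a parameter so the sketch runs anywhere.
MP=${MP:-/mymp}
count=$(ls "$MP"/src_data* 2>/dev/null | wc -l)
echo "found $count src_data files (expected 20)"
```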

6)  Exercise Complete.  This concludes SRDF Lab 2.  We have explored the use of SRDF for disaster recovery applications. 

Exercise Wrap-up:

Describe the effect of each command:

symrdf failover ____________________________

symrdf failback ____________________________

What are the Volume Manager considerations prior to a failover or a failback? ___________

What could you do after a failover, but before a failback? ___________

 


 

 
