How to set up Veritas Cluster File System on Solaris?



  • Veritas Cluster Server.

    The Veritas Cluster File System (CFS) is a licensed product. You need both a VCS license and a license covering the other packages, so obtain licenses for both CFS and VCS. The VRTSlic
    package must be installed so that CFS can be licensed via the vxlicense command.

    CFS is part of the functionality of VxFS, the Veritas File System.

    The following packages make up Veritas Cluster Server:
    VRTSvcs - Veritas Cluster Server
    VRTSllt - Veritas Low Latency Transport
    VRTSgab - Veritas Group Membership and Atomic Broadcast
    VRTSperl - Perl for VCS
    VRTSvcsw - Veritas Cluster Manager Web Console
    VRTSweb - Veritas Web GUI Engine
    VRTSvcsdc - Veritas Cluster Docs.

    The following packages make up the Cluster File System and Volume Manager:
    VRTSvxfs - Veritas File System with CFS software
    VRTScfsdc - Veritas FS docs
    VRTSvxvm - Veritas Volume manager
    VRTSvmdev - Veritas volume manager header and library files
    VRTSvmsa - Veritas volume manager docs
    VRTSvmman - Veritas volume manager man pages
    VRTSglm - Veritas group lock mgr
    VRTScavf - Veritas Cluster server Enterprise agents for Volume manager
    VRTSlic - Veritas license facility
    VRTScscm - Veritas cluster server cluster manager

    The CD contains:
    installvcs - Adds and configures the nodes.
    uninstallvcs - Removes VCS packages.
    cfsinstall - adds CFS based packages.
    cfsdeinstall - removes CFS based packages.

    Some useful paths to set up in .profile:
    PATH=$PATH:/opt/VRTSvxfs/sbin:/opt/VRTSvxfs/lib/vcs:/opt/VRTSvmsa/bin:/usr/lib/fs/vxfs:/usr/lib/vxvm/bin:/usr/sbin:/etc/fs/vxfs:/etc/vx/bin;export PATH

    MANPATH=/opt/VRTS/man:$MANPATH;export MANPATH

    Resources and Service Groups for the CFS.
    CFS: CFSMount and CFSfsckd
    CVM: CVMCluster, CVMDiskGroup, CVMMultiNIC, and CVMVolume

    It is advisable to create a service group for each shared disk group.

    Creating a Service group for the Disk Group.

    • Put all CVMDiskGroup resources in the CVM service group
    • The Parallel attribute of the service group must be set
    • The AutoFailOver attribute of the service group must be cleared (see the sketch after this list)
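
    As a minimal sketch, assuming a service group named cvm and nodes node1 and node2 (the names are illustrative), the attributes above can be set from the command line:

    # haconf -makerw
    # hagrp -add cvm
    # hagrp -modify cvm SystemList node1 0 node2 1
    # hagrp -modify cvm Parallel 1 - set the Parallel attribute
    # hagrp -modify cvm AutoFailOver 0 - clear the auto-failover attribute
    # haconf -dump -makero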

    Creating a resource group for a disk group

    • The CVMDiskGroup instance must have its CVMActivation attribute localized and set to shared-write (sw)
      for each node.
    • The CVMDiskGroup instance must be linked to the CVMCluster resource (see the sketch below)
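
    A minimal sketch of the corresponding resource commands, assuming a CVMDiskGroup resource named cvmdg1 in the cvm service group and a CVMCluster resource named cvm_clus (the resource names are illustrative; cvm_clus matches the name used later in this post):

    # hares -add cvmdg1 CVMDiskGroup cvm
    # hares -local cvmdg1 CVMActivation - make the attribute per-node (localized)
    # hares -modify cvmdg1 CVMActivation sw -sys node1
    # hares -modify cvmdg1 CVMActivation sw -sys node2
    # hares -link cvmdg1 cvm_clus - cvmdg1 now depends on cvm_clus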

    Creating a resource instance for shared volume

    Hardware setup for Cluster File System:

    The nodes in the cluster must share storage over Fibre Channel I/O, in many cases through a Fibre Channel switch.
    Each node runs the Solaris environment and has a Fibre Channel HBA attached.

    Install VCS via installvcs; note that installvcs does not install the GUIs. The installvcs script prompts for the
    VCS license. A site license can be installed on each node in the cluster. If licensing is
    done afterwards, use licensevcs. When installing VCS, the system you install from does not need to
    be in the cluster.

    Each system must be on the public network with a unique host name.
    Things you must supply to installvcs:

    • Private network interconnect.
    • A name for the cluster
    • A unique cluster ID from 0 - 255
    • The host names of the systems in the cluster
    • Device names of the NICs in the private network.

    Make sure root rsh access is set up on each node using /.rhosts.
    Make sure all previously installed VCS packages have been removed.
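
    For example, /.rhosts on each node might simply list the cluster host names with the root user (sys1 and sys2 are the hypothetical host names used in the status checks below):

    sys1 root
    sys2 root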

    ./installvcs - follow prompts.
    

    The Cluster Web Console and Java Web Console are options while running the installvcs script.

    Check the files /etc/llthosts, /etc/llttab, /etc/gabtab, and /etc/VRTSvcs/conf/config/main.cf.
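
    As an illustration, for a two-node cluster with node IDs 0 and 1 (the host names sys1 and sys2 are hypothetical), the LLT and GAB files look roughly like this:

    /etc/llthosts:
    0 sys1
    1 sys2

    /etc/gabtab:
    /sbin/gabconfig -c -n2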

    Running some status commands:

    # lltstat -n - on sys1
    # lltstat -n - on sys2
    # lltstat -nvv | more - verbose output, check that the links show OPEN status.
    # lltstat -p - on any system, check the connections and which node numbers are listed.

    # /sbin/gabconfig -a - check GAB memberships; Port a membership means GAB is communicating, Port h membership means VCS (had) has started.

    # hastatus -summary - check the status of the cluster.
    # hasys -display - on one system to display status; the output should be similar on all systems in the cluster.

    After VCS is installed, install CFS using cfsinstall; this installs a number of packages needed for CFS, which
    include VxFS and VxVM. Install any patches from the Veritas CD-ROM.
    
    Install the license key for VxFS cluster functionality:

    # vxlicense -c 
    Please enter your key: 9999 9999.....

    The VCS web-based cluster manager lets you monitor the cluster from any browser. You can also use the Java-based GUI.

    For Web console:

    http://web_gui_ip:8181/vcs - the IP is the cluster virtual IP address; the default login is admin/password and the initial page is the cluster summary view.
    eg: 
    http://10.80.24.64:8181/vcs
    

    For Java console:
    This is installed on the PC windows env, on the CDROM insert the Veritas CD goto /ClusterManager, click the
    setup.exe

    The VMSA GUI is started on the master node (UNIX); enter:

    # /opt/VRTSvmsa/bin/vmsa &
    

    This can also be installed on Windows.

    The cluster functionality of Volume Manager (CVM) allows multiple hosts to access the same disks. Initially,
    vxinstall is run to place disks under VxVM control.

    The CVM and CFS agents manage resources: they take resources online and offline, monitor them, and report
    changes. To install and configure the agents:

    CVM:

    # /usr/lib/vxvm/bin/vxcvmconfig - follow the prompts; choose the defaults in most cases.
    

    CFS:

    # /opt/VRTSvxfs/lib/vcs/vxcfsconfig - from any node in the cluster
    # /usr/lib/vxvm/bin/vxclustadm nodestate - on each node
    # vxdctl -c mode - determine if the node is master or slave, on each node.
    

    On the master node, create a shared disk group in one of two ways.

    Using the GUI:

    # /opt/VRTSvmsa/bin/vmsa &
    

    Or using a script; an example script to create "cfsdg":

    #!/usr/bin/sh -x
    #
    export PATH=$PATH:/usr/sbin
    #
    # Name of the shared disk group
    #
    shared_dg_name="cfsdg"
    #
    # List of shared devices to be part of the shared disk group
    #
    shared_device_list="c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0
    c1t6d0 c1t8d0 c3t4d0"
    first="yes"
    count=0
    for i in $shared_device_list; do
        /etc/vx/bin/vxdisksetup $i
        vxdisk online ${i}s2
        vxdisk -f init ${i}s2
        count=`expr $count + 1`
        if [ "$first" = "yes" ]; then
            vxdg -s init $shared_dg_name $shared_dg_name$count=${i}s2
            first="no"
        else
            vxdg -g $shared_dg_name adddisk $shared_dg_name$count=${i}s2
        fi
    done
    

    Run the script for each shared disk group.

    Create a shared Veritas volume in the shared disk group (it is preferred that the VMSA GUI is used to create CFS resources):

    # vxassist -g cfsdg make vol1 20M
    # vxprint
    

    Creating the cluster file system and mounting it in shared mode:

    # mkfs -F vxfs /dev/vx/rdsk/cfsdg/vol1 - if no size is specified the entire volume is used.
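
    For instance, to create the file system with an explicit size (the size operand to mkfs for vxfs is in 512-byte sectors, so 40960 sectors is roughly the 20M of the volume created above):

    # mkfs -F vxfs /dev/vx/rdsk/cfsdg/vol1 40960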
    

    Mount the file system with the -o cluster mount option; use the same mount directory and the same mount options on each node.

    # mount -F vxfs -o cluster /dev/vx/dsk/cfsdg/vol1 /mnt
    

    Add a node to an existing cluster.

    * Log into the new system as root
    * mount the CD
    * pkgadd -d . VRTSperl VRTSvcs VRTSgab VRTSllt VRTSvcsw VRTSweb VRTSvcsdc
    * Before installation, license Veritas Cluster Server using licensevcs.
    * ./installvcs and follow the prompts.
    * create an /etc/llthosts file on the new node and modify /etc/llthosts on the other nodes to include the new node.
    * create the /etc/llttab file on the new host:
    set-node star35
    set-cluster 2
    link link1 /dev/qfe:0 - ether - -
    link link2 /dev/qfe:1 - ether - -
    start
    * Run the LLT configuration:
    # /sbin/lltconfig -c 
    * create /etc/gabtab by copying it from another node and changing the -n number to include the new node, e.g. going from two
    nodes to three nodes use -n3.
    The file should contain: /sbin/gabconfig -c -n 3
    * Run the GAB configuration to make sure the new node is in the membership:
    # /sbin/gabconfig -a
    * On the new node run hastart
    # hastart
    * On each node run the GAB configuration to check that the new node is included.
    # /sbin/gabconfig -a
    

    To install the CFS on the new node:

    * install the CFS packages
    * install the license
    * Run vxinstall to create a rootdg if need be.
    

    To configure the CFS and CVM agents on the new node.

    * Unmount all the VxFS cluster file systems from the original nodes.
    * Take the CVM service group offline:
    # hagrp -offline cvm -sys node1
    # hagrp -offline cvm -sys node2
    * Open the configuration file for writing:
    # haconf -makerw
    * Add the new node to the CVM system list and specify failover priority of zero.
    # hagrp -modify cvm SystemList -add node3 0
    * Add the new node to the autostart list 
    # hagrp -modify cvm AutoStartList node1 node2 node3
    * Add the virtual IP addresses of the CVM link for the new node
    # hares -modify cvm_clus CVMNodeId -add node3 2
    # hares -modify cvm_clus CVMNodeAddr -add node3 169.254.34.2 - the virtual and real IP must be on the same subnet.
    * Configure the real IPs for the private link for the new node.
    # hares -modify cvmmnic CVMAddress -add node3 169.254.65.3
    # hares -modify cvmmnic CVMDevice -add qfe0 169.254.65.102 -sys node3
    # hares -modify cvmmnic CVMDevice -add qfe1 169.254.65.103 -sys node3
    * Write the config to disk
    # haconf -dump -makero
    * Put the CVM service group back online
    # hagrp -online cvm -sys node1
    # hagrp -online cvm -sys node2
    # hagrp -online cvm -sys node3
    * Check status
    # hastatus -sum
    

    The CFS works in terms of master/slave, or primary/secondary.
    To show which node is the primary (master) for a mounted file system:

    # fsclustadm -v showprimary /mount_point
    

    To set the local node as the primary:

    # fsclustadm -v setprimary /mount_point
    

    Adding a node in SFCFS 4.1

    * Connect the system to the cluster using the private networks
    * Attach the storage
    * Install the SFCFS packages.
    # ./installsfcfs -installonly	- follow prompts
    * Log into the new system as root
    * mount the CD
    * pkgadd -d . VRTSperl VRTSvcs VRTSgab VRTSllt VRTSvcsw VRTSweb VRTSvcsdc
    * Before installation, license Veritas Cluster Server using licensevcs.
    * ./installvcs and follow the prompts.
    * create an /etc/llthosts file on the new node and modify /etc/llthosts on the other nodes to include the new node.
    * create the /etc/llttab file on the new host:
    set-node star35
    set-cluster 2
    link link1 /dev/qfe:0 - ether - -
    link link2 /dev/qfe:1 - ether - -
    start
    * Run the LLT configuration:
    # /sbin/lltconfig -c 
    * create /etc/gabtab by copying it from another node and changing the -n number to include the new node, e.g. going from two
    nodes to three nodes use -n3.
    The file should contain: /sbin/gabconfig -c -n 3
    * Run the GAB configuration to make sure the new node is in the membership:
    # /sbin/gabconfig -a
    * On the new node run hastart
    # hastart
    * On each node run the GAB configuration to check that the new node is included.
    # /sbin/gabconfig -a
    * Verify cluster is running
    # hastatus -summary
    # vxinstall
    # shutdown -y -i6 -g0
    

    Configure SFCFS and CVM agents

    * Verify no service groups remain online that depend on CVM, such as SFCFS
    # hagrp -dep cvm
    * If there are dependencies, take the dependent service groups offline, then take the CVM service group offline
    # hagrp -offline cvm -sys node1
    # hagrp -offline cvm -sys node2
    # haconf -makerw
    * Add the new node to the CVM system list (with failover priority) and to the autostart list
    # hagrp -modify cvm SystemList -add node3 0
    # hagrp -modify cvm AutoStartList node1 node2 node3
    * Add new node node3 and its node ID to the cvm_clus resource
    # hares -modify cvm_clus CVMNodeId -add node3 2
    # haconf -dump -makero
    * Run the commands below on existing cluster nodes to enable them to recognise the new node.
    # /etc/vx/bin/vxclustadm -m vcs -t gab reinit (each node)
    * Run the following on a single node to see if the new node is recognised
    # /etc/vx/bin/vxclustadm nidmap
    * Bring the CVM service group online on each node
    # hagrp -online cvm -sys node1
    # hagrp -online cvm -sys node2
    # hagrp -online cvm -sys node3
    * At this stage you can configure the I/O fencing coordinator disks
    # hastatus -sum
    

    Administering SFCFS:
    To install the VEA GUI on Windows, run vrtsobgui.msi from the CD.

    On UNIX, make sure the VRTSobgui package is installed.

    # /opt/VRTS/bin/vea &
    Create a shared disk group:
    Click Actions > New Disk Group - the New Disk Group wizard appears
    - enter the disk group name
    - select the checkbox to create a cluster group
    - select the disks from the available disks
    - enter the names of the disks to be added
    - select the checkbox to create a shared disk group
    - set the activation mode to shared-write
    - click OK
    - All data on the disks is DESTROYED; encapsulation requires a reboot.

    Create a shared file system:
    - Click Actions > File System > New File System
    - check cluster-mount
    

    Cluster File System commands.

    • cfscluster
    • cfsdgadm
    • cfsmntadm
    • cfsmount
    • cfsumount
    • fsadm - from primary.
    • fsclustadm
    # cfscluster status
    

    Growing a CFS
    You need to do this from the CVM master node (to grow the volume) and from the CFS primary (to grow the file system).
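
    As a minimal sketch, assuming the vol1/cfsdg example above is cluster-mounted on /mnt and the sizes are illustrative (the fsadm size is given in 512-byte sectors):

    # vxdctl -c mode - find the CVM master
    # vxassist -g cfsdg growto vol1 40m - on the CVM master, grow the underlying shared volume
    # fsclustadm -v showprimary /mnt - find the CFS primary
    # fsadm -F vxfs -b 81920 -r /dev/vx/rdsk/cfsdg/vol1 /mnt - on the CFS primary, grow the file system to match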

    Do not use /etc/vfstab to determine which file systems to mount after reboot; use the VCS configuration file.
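
    For illustration, a cluster mount added with cfsmntadm (as in the examples further down) appears in main.cf as a CFSMount resource roughly like the following sketch (the attribute values are illustrative):

    CFSMount logdata_mnt (
        MountPoint = "/tmp/logdata/log_files"
        BlockDevice = "/dev/vx/dsk/logdata/log_files"
        )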

    Shared Disk Groups via Volume Manager using CVM.
    The /etc/default/vxdg file holds the defaults; you can add entries, and the file should be the same on all nodes. If
    the defaults file is changed while vxconfigd is running, vxconfigd must be restarted for the change to take effect.
    Example entries:
    enable_activation=true
    default_activation_mode=mode..

    Default mode of activation is shared-write

    To display the activation mode for a shared disk group, use vxdg list diskgroup; you can also use
    vxdg to change the activation mode.
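
    For example, a sketch using the logdata disk group from the examples below:

    # vxdg list logdata - the activation mode is shown in the detailed output
    # vxdg -g logdata set activation=sw - switch the activation mode to shared-write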

    Determine if disk is sharable.

    # vxdisk list accessname - accessname eg: c4t1d0
    List shared disk groups
    # vxdg list - you should see enabled, shared
    # vxdg -s list 
    # vxdg list diskgroup
    

    Create a shared disk group - this can only be done on the master node. Be careful that the disk is not already part of another shared disk group.

    Create a shared disk group.

    # vxdg -s init diskgroup [diskname=]devicename
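
    For example, matching the naming used in the script above (the device name is illustrative):

    # vxdg -s init cfsdg cfsdg1=c2t0d0s2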
    

    Forcibly add a disk to a shared disk group

    # vxdg -f adddisk -g diskgroup [diskname=]devicename
    

    Shared disk groups can only be imported on the master node, for instance if the disk group
    was set up before the cluster software was run.

    # vxdg -s import diskgroup_name - again, use -f to force the import.
    

    Making a shared disk group a private disk group:

    # vxdg deport diskgroup
    

    and then re-import it (without -s, so it is imported as a private disk group):

    # vxdg import diskgroup
    

    Display the cluster protocol version:

    # vxdctl list
    or
    # vxdctl protocolversion

    Related commands:
    # vxdctl support - display the supported protocol versions and features
    # vxdctl protocolrange - display the range of supported cluster protocol versions
    # vxdctl upgrade - upgrade the cluster protocol version to the latest supported version
    

    Installing and configuring SFCFS for the first time.

    # vxlicinst - installs a license key 
    # vxlicrep - display current licenses
    # vxlictest - retrieves features and their descriptions.
    
    Mount the CD:
    ./installer
    

    Example: creating disk groups (on the master node):

    # cfscluster status
    

    Create a new disk group in shared mode:

    # vxdg -s init logdata c4t0s6
    or import an existing disk group as shared:
    # vxdg -C -s import logdata
    # vxdg list
    # cfsdgadm add logdata all=sw
    # cfsdgadm display
    # cfsdgadm activate logdata
    # cfsdgadm display -v logdata
    # cfsdgadm show_package logdata
    

    Create Volumes

    # vxassist -g logdata make log_files 1024m
    # vxprint
    # newfs -F vxfs /dev/vx/rdsk/logdata/log_files
    Create cluster mount point
    # cfsmntadm add logdata log_files /tmp/logdata/log_files all=rw
    # cfsmntadm display
    # cfsmount /tmp/logdata/log_files
    
    Another example, adding a cluster mount for a different disk group and volume:
    # cfsmntadm add kibbledg kvol01 /kvol01 all=rw
    
