High Availability Storage [closed]

I would like to make 2 TB or so available via NFS and CIFS. I am looking for a 2 (or more) server solution for high availability and the ability to load balance across the servers if possible. Any suggestions for clustering or high availability solutions?

This is business use, and we plan on growing to 5-10 TB over the next few years. Our facility runs almost 24 hours a day, six days a week. We can tolerate 15-30 minutes of downtime, but we want to minimize data loss. I want to minimize 3 AM calls.

We are currently running one server with ZFS on Solaris and we are looking at AVS for the HA part, but we have had minor issues with Solaris (CIFS implementation doesn't work with Vista, etc) that have held us up.

We have started looking at:

  • DRBD over GFS (GFS for distributed lock capability)
  • Gluster (needs client pieces, no native CIFS support?)
  • Windows DFS (docs say it only replicates after the file closes?)

We are looking for a "black box" that serves up data.

We currently snapshot the data in ZFS and send the snapshot over the net to a remote datacenter for offsite backup.
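For reference, that backup path looks roughly like the following sketch (the pool/dataset names, snapshot labels, and remote host are illustrative placeholders, not our real ones):

```shell
# Take a new snapshot, then send only the delta since the previous
# snapshot to the remote datacenter.  All names are placeholders.
zfs snapshot tank/data@2009-01-02
zfs send -i tank/data@2009-01-01 tank/data@2009-01-02 | \
    ssh backup.example.com zfs receive tank/data-mirror
```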

Our original plan was to have a 2nd machine and rsync every 10 - 15 min. The issue on a failure would be that ongoing production processes would lose 15 minutes of data and be left "in the middle". They would almost be easier to start from the beginning than to figure out where to pickup in the middle. That is what drove us to look at HA solutions.

asked Dec 30 '22 by petey

1 Answer

I've recently deployed hanfs using DRBD as the backend. In my situation I'm running active/standby mode, but I've also tested it successfully using OCFS2 in primary/primary mode. Unfortunately there isn't much documentation out there on how best to achieve this; most of what exists is barely useful at best. If you do go down the drbd route, I highly recommend joining the drbd mailing list and reading all of the documentation. Here's my ha/drbd setup, plus a script I wrote to handle ha's failures:


DRBD 8 is required - it's provided by the drbd8-utils and drbd8-source packages (I believe from backports). Once those are installed, use module-assistant to build and install the kernel module: m-a a-i drbd8. Then either reboot, or run depmod -a followed by modprobe drbd.
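As a consolidated sketch (package names as described above; exact names and availability vary by release):

```shell
apt-get install drbd8-utils drbd8-source module-assistant
m-a a-i drbd8      # build and install the drbd kernel module
depmod -a          # or simply reboot instead of these
modprobe drbd      # two commands
```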

You'll need a backing partition to use for drbd. Do not put this partition on LVM, and do not put LVM on top of the drbd device - either way you'll hit all sorts of problems.

hanfs1's /etc/drbd.conf:

global {
        usage-count no;
}
common {
        protocol C;
        disk { on-io-error detach; }
}
resource export {
        syncer {
                rate 125M;
        }
        on hanfs2 {
                address         172.20.1.218:7789;
                device          /dev/drbd1;
                disk            /dev/sda3;
                meta-disk       internal;
        }
        on hanfs1 {
                address         172.20.1.219:7789;
                device          /dev/drbd1;
                disk            /dev/sda3;
                meta-disk       internal;
        }
}

hanfs2's /etc/drbd.conf (identical - DRBD expects the same resource definition on both nodes):


global {
        usage-count no;
}
common {
        protocol C;
        disk { on-io-error detach; }
}
resource export {
        syncer {
                rate 125M;
        }
        on hanfs2 {
                address         172.20.1.218:7789;
                device          /dev/drbd1;
                disk            /dev/sda3;
                meta-disk       internal;
        }
        on hanfs1 {
                address         172.20.1.219:7789;
                device          /dev/drbd1;
                disk            /dev/sda3;
                meta-disk       internal;
        }
}

Once configured, we need to bring up drbd next.

drbdadm create-md export
drbdadm attach export
drbdadm connect export

We must now perform an initial synchronization of data - obviously, if this is a brand new drbd cluster, it doesn't matter which node you choose.

Once done, you'll need to mkfs.yourchoiceoffilesystem on your drbd device - the device in our config above is /dev/drbd1. http://www.drbd.org/users-guide/p-work.html is a useful document to read while working with drbd.
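The initial sync and filesystem creation steps look roughly like this with DRBD 8 (run on whichever node you pick as the initial Primary; ext3 is just an example filesystem choice):

```shell
# Promote this node and overwrite the peer - this destroys the peer's data,
# which is fine only on a brand new cluster.
drbdadm -- --overwrite-data-of-peer primary export
# Watch /proc/drbd until the sync completes, then create the filesystem:
mkfs.ext3 /dev/drbd1
```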

Heartbeat

Install heartbeat2. (Pretty simple, apt-get install heartbeat2).

/etc/ha.d/ha.cf on each machine should consist of:

hanfs1:


logfacility local0
keepalive 2
warntime 10
deadtime 30
initdead 120

ucast eth1 172.20.1.218

auto_failback no

node hanfs1
node hanfs2

hanfs2:


logfacility local0
keepalive 2
warntime 10
deadtime 30
initdead 120

ucast eth1 172.20.1.219

auto_failback no

node hanfs1
node hanfs2

/etc/ha.d/haresources should be the same on both ha boxes:

hanfs1 IPaddr::172.20.1.230/24/eth1
hanfs1  HeartBeatWrapper

I wrote a wrapper script to deal with the idiosyncrasies caused by nfs and drbd in a failover scenario. This script should exist within /etc/ha.d/resource.d/ on each machine.



#!/bin/bash

# Heartbeat fails hard, so this is a wrapper to get around that stupidity.
# I'm just wrapping the heartbeat scripts, except for in the case of umount,
# as they work, mostly.

if [[ -e /tmp/heartbeatwrapper ]]; then
    runningpid=$(cat /tmp/heartbeatwrapper)
    if [[ -z $(ps --no-heading -p $runningpid) ]]; then
        echo "PID found, but process seems dead. Continuing."
    else
        echo "PID found, process is alive, exiting."
        exit 7
    fi
fi

echo $$ > /tmp/heartbeatwrapper

if [[ x$1 == "xstop" ]]; then

/etc/init.d/nfs-kernel-server stop #>/dev/null 2>&1

# The NFS init script isn't LSB compliant - its exit code is 0 no matter
# what happens - so we just have to hope that nfsd actually catches the
# signal to exit, and manages to shut down its connections.
# If it doesn't, we'll kill it later, then term any other nfs stuff afterwards.
# I found this to be an interesting insight into just how badly NFS is written.

sleep 1

#we don't want to shutdown nfs first!
#The lock files might go away, which would be bad.

#The above seems to not matter much, the only thing I've determined
#is that if you have anything mounted synchronously, it's going to break
#no matter what I do.  Basically, sync == screwed; in NFSv3 terms.      
#End result of failing over while a client that's synchronous is that   
#the client hangs waiting for its nfs server to come back - thing doesn't
#even bother to time out, or attempt a reconnect.                        
#async works as expected - it insta-reconnects as soon as a connection seems
#to be unstable, and continues to write data.  In all tests, md5sums have   
#remained the same with/without failover during transfer.                   

#So, we first unmount /export - this prevents drbd from complaining
#when we attempt to turn this node secondary.

#That's only partly true: LVM turned out to be entirely to blame for DRBD
#refusing to unmount.  Don't get me wrong, having /export mounted doesn't
#help either, but still.

#Handle the case where one or the other is already unmounted, which would
#otherwise cause us to terminate early.

if [[ "$(grep -o /var/lib/nfs/rpc_pipefs /etc/mtab)" ]]; then
    for ((test=1; test <= 10; test++)); do
        umount /var/lib/nfs/rpc_pipefs >/dev/null 2>&1
        if [[ -z $(grep -o /var/lib/nfs/rpc_pipefs /etc/mtab) ]]; then
            break
        fi
        #try again, harder this time
        umount -l /var/lib/nfs/rpc_pipefs >/dev/null 2>&1
        if [[ -z $(grep -o /var/lib/nfs/rpc_pipefs /etc/mtab) ]]; then
            break
        fi
    done
    if grep -q /var/lib/nfs/rpc_pipefs /etc/mtab; then
        rm -f /tmp/heartbeatwrapper
        echo "Problem unmounting rpc_pipefs"
        exit 1
    fi
fi

if [[ "$(grep -o /dev/drbd1 /etc/mtab)" ]]; then
    for ((test=1; test <= 10; test++)); do
        umount /export >/dev/null 2>&1
        if [[ -z $(grep -o /dev/drbd1 /etc/mtab) ]]; then
            break
        fi
        #try again, harder this time
        umount -l /export >/dev/null 2>&1
        if [[ -z $(grep -o /dev/drbd1 /etc/mtab) ]]; then
            break
        fi
    done
    if grep -q /dev/drbd1 /etc/mtab; then
        rm -f /tmp/heartbeatwrapper
        echo "Problem unmounting /export"
        exit 1
    fi
fi


#Now it's important that we shut down nfs - it can't write to /export anymore,
#so that's fine.  If we leave it running at this point, drbd will screw up when
#trying to go secondary.
#See the contradictory comment above for why this doesn't matter anymore.  These
#comments are left in entirely to remind me of the pain this caused me to resolve.

pidof nfsd | xargs kill -9 >/dev/null 2>&1

sleep 1                                   

if [[ -n $(ps aux | grep nfs | grep -v grep) ]]; then
    echo "nfs still running, trying to kill again"   
    pidof nfsd | xargs kill -9 >/dev/null 2>&1       
fi                                                   

sleep 1

/etc/init.d/nfs-kernel-server stop #>/dev/null 2>&1

sleep 1

#next we need to tear down drbd - easy with the heartbeat scripts
#it takes input as resourcename start|stop|status                
#First, we'll check to see if it's stopped                       

/etc/ha.d/resource.d/drbddisk export status >/dev/null 2>&1
if [[ $? -eq 2 ]]; then                                    
    echo "resource is already stopped for some reason..."  
else                                                       
    for ((i=1; i <= 10; i++)); do
        /etc/ha.d/resource.d/drbddisk export stop >/dev/null 2>&1
        state=$(egrep -o "st:[A-Za-z/]*" /proc/drbd | cut -d: -f2)
        if [[ $state == "Secondary/Secondary" ]] || [[ $state == "Secondary/Unknown" ]]; then
            echo "Successfully stopped DRBD"
            break
        else
            echo "Failed to stop drbd for some reason"
            cat /proc/drbd
            if [[ $i -eq 10 ]]; then
                rm -f /tmp/heartbeatwrapper
                exit 50
            fi
        fi
    done
fi

rm -f /tmp/heartbeatwrapper                                                                                                                              
exit 0                                                                                                                                                   

elif [[ x$1 == "xstart" ]]; then

#start up drbd first
/etc/ha.d/resource.d/drbddisk export start >/dev/null 2>&1
if [[ $? -ne 0 ]]; then                                   
    echo "Something seems to have broken. Let's check possibilities..."
    testvar=$(egrep -o "st:[A-Za-z/]*" /proc/drbd | cut -d: -f2)       
    if [[ $testvar == "Primary/Unknown" ]] || [[ $testvar == "Primary/Secondary" ]]
    then                                                                           
        echo "All is fine, we are already the Primary for some reason"             
    elif [[ $testvar == "Secondary/Unknown" ]] || [[ $testvar == "Secondary/Secondary" ]]
    then                                                                                 
        echo "Trying to assume Primary again"                                            
        /etc/ha.d/resource.d/drbddisk export start >/dev/null 2>&1                       
        if [[ $? -ne 0 ]]; then                                                          
            echo "I give up, something's seriously broken here, and I can't help you to fix it."
            rm -f /tmp/heartbeatwrapper                                                         
            exit 127                                                                            
        fi                                                                                      
    fi                                                                                          
fi                                                                                              

sleep 1                                                                                         

#now we remount our partitions                                                                  

for ((test=1; test <= 10; test++)); do
    mount /dev/drbd1 /export >/tmp/mountoutput 2>&1
    if [[ -n $(grep -o export /etc/mtab) ]]; then
        break
    fi
done

if [[ -z $(grep -o export /etc/mtab) ]]; then
    rm -f /tmp/heartbeatwrapper
    exit 125
fi


#I'm really unsure at this point of the side-effects of not having rpc_pipefs mounted.          
#The issue here, is that it cannot be mounted without nfs running, and we don't really want to start
#nfs up at this point, lest it ruin everything.                                                     
#For now, I'm leaving mine unmounted, it doesn't seem to cause any problems.                        

#Now we start up nfs.

/etc/init.d/nfs-kernel-server start >/dev/null 2>&1
if [[ $? -ne 0 ]]; then
    echo "There's not really that much that I can do to debug nfs issues."
    echo "Probably your configuration is broken. I'm terminating here."
    rm -f /tmp/heartbeatwrapper
    exit 129
fi

#And that's it, done.

rm -f /tmp/heartbeatwrapper
exit 0

elif [[ "x$1" == "xstatus" ]]; then

#Lets check to make sure nothing is broken.

#DRBD first
/etc/ha.d/resource.d/drbddisk export status >/dev/null 2>&1
if [[ $? -ne 0 ]]; then
    echo "stopped"
    rm -f /tmp/heartbeatwrapper
    exit 3
fi

#mounted?
grep -q drbd /etc/mtab >/dev/null 2>&1
if [[ $? -ne 0 ]]; then
    echo "stopped"
    rm -f /tmp/heartbeatwrapper
    exit 3
fi

#nfs running?
/etc/init.d/nfs-kernel-server status >/dev/null 2>&1
if [[ $? -ne 0 ]]; then
    echo "stopped"
    rm -f /tmp/heartbeatwrapper
    exit 3
fi

echo "running"
rm -f /tmp/heartbeatwrapper
exit 0

fi
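The script extracts the node states from /proc/drbd with the same egrep/cut pipeline in several places; isolated as a helper, that parsing looks like this (the sample status line is merely illustrative of DRBD 8's /proc/drbd format):

```shell
#!/bin/bash
# Extract the "st:" (local/peer role) field from /proc/drbd-style text on stdin.
drbd_state() {
    grep -Eo 'st:[A-Za-z/]*' | cut -d: -f2
}

sample=' 1: cs:Connected st:Primary/Secondary ds:UpToDate/UpToDate C r---'
drbd_state <<<"$sample"   # prints Primary/Secondary
```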

With all of the above done, you'll then just want to configure /etc/exports:

/export 172.20.1.0/255.255.255.0(rw,sync,fsid=1,no_root_squash)

Then it's just a case of starting up heartbeat on both machines and issuing hb_takeover on one of them. You can verify that it's working by making sure the node you issued the takeover on is primary: check /proc/drbd, check that the device is mounted correctly, and check that you can access nfs.
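A quick smoke test of the takeover, assuming heartbeat put its helper scripts in the usual location (paths vary by distribution and heartbeat version):

```shell
/etc/init.d/heartbeat start       # run on both nodes
/usr/lib/heartbeat/hb_takeover    # on the node that should become primary
cat /proc/drbd                    # expect st:Primary/Secondary on this node
mount | grep /export              # /dev/drbd1 should be mounted on /export
showmount -e localhost            # /export should be listed
```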

--

Best of luck man. Setting it up from the ground up was, for me, an extremely painful experience.

answered Jan 29 '23 by Tony Dodd