Clustering Working – Weirdly
I got a working Novell OES Linux cluster after a day or so of fiddling. What I am trying to do is get two blades in an IBM BladeCenter clustered for hosting two big GroupWise POAs, which will be stored on two shared volumes in the IBM SAN. First of all, I had to install the QLogic Linux driver for the Fibre Channel HBAs in the blades to get reliable multipathing at boot. I couldn’t get the built-in 2.6 kernel driver in OES Linux to automatically map device-mapper names to the multipath partitions on the SAN without adding the QLogic driver.
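For what it’s worth, the multipath-tools side of this boils down to a small /etc/multipath.conf. Here’s a sketch of the shape mine took – the WWIDs and aliases below are placeholders for illustration, not my actual values:

```
# /etc/multipath.conf -- minimal sketch; wwid and alias values
# are placeholders, not the real ones from my SAN.
defaults {
    user_friendly_names yes
}

blacklist {
    # don't multipath the blade's local disk
    devnode "^sda$"
}

multipaths {
    multipath {
        # WWID of the first shared LUN (placeholder)
        wwid   360000000000000000000000000000001
        alias  gwvol1
    }
    multipath {
        # WWID of the second shared LUN (placeholder)
        wwid   360000000000000000000000000000002
        alias  gwvol2
    }
}
```

With aliases like these, the devices show up predictably under /dev/disk/by-name/ on every boot, which is what the cluster mounts depend on.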
Once that was working, I created a Novell Cluster Services (NCS) cluster with the YaST NCS tool on one of my nodes, which created an SBD partition on one of my shared volumes. At that point, the Enterprise Volume Management System (EVMS) took over my first disk. I had previously just been using multipath-tools to get names for my partitions and mounting them from /dev/disk/by-name, but after I installed NCS, the rest of the disk that it put the SBD partition on started showing up under /dev/evms. I fiddled for a while trying to get EVMS to manage my other partition too, but every time I rebooted, the partition I had created in evmsgui would disappear. I ended up deciding to mount my first clustered partition from /dev/evms/ and my second one from /dev/disk/by-name/. It is a bit hokey, but I now have two cluster resources hosting reiserfs partitions in the SAN that automatically fail over and fail back when one or the other cluster node goes away.
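The failover behavior comes from the load and unload scripts attached to each cluster resource. A sketch of what mine look like – the device paths, mount points, and secondary IP here are made-up examples, and note how the two resources mount from the two different device trees:

```
#!/bin/bash
# NCS load script for the first GroupWise resource -- a sketch;
# device path, mount point, and secondary IP are placeholders.
. /opt/novell/ncs/lib/ncsfuncs
exit_on_error mount -t reiserfs /dev/evms/gwvol1 /mnt/gwpoa1
exit_on_error add_secondary_ipaddress 10.0.0.101
exit 0
```

```
#!/bin/bash
# NCS unload script for the same resource -- same placeholders.
. /opt/novell/ncs/lib/ncsfuncs
ignore_error del_secondary_ipaddress 10.0.0.101
exit_on_error umount /mnt/gwpoa1
exit 0
```

The second resource’s scripts are identical in shape, except they mount from /dev/disk/by-name/ instead of /dev/evms/. When a node dies, NCS runs the load script on the surviving node, which picks up the mount and the secondary IP.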
Next task is to get GroupWise 7 running on these cluster resources.