26 GFS2 #
Global File System 2 or GFS2 is a shared disk file system for Linux computer clusters. GFS2 allows all nodes to have direct concurrent access to the same shared block storage. GFS2 has no disconnected operating mode, and no client or server roles. All nodes in a GFS2 cluster function as peers. GFS2 supports up to 32 cluster nodes. Using GFS2 in a cluster requires hardware to allow access to the shared storage, and a lock manager to control access to the storage.
26.1 GFS2 packages and management utilities #
To use GFS2, make sure gfs2-utils and a matching gfs2-kmp-* package for your kernel are installed on each node of the cluster.
The gfs2-utils package provides the following utilities for management of GFS2 volumes. For syntax information, see their man pages.
- fsck.gfs2
Checks the file system for errors and optionally repairs errors.
- gfs2_jadd
Adds additional journals to a GFS2 file system.
- gfs2_grow
Grows a GFS2 file system.
- mkfs.gfs2
Creates a GFS2 file system on a device, usually a shared device or partition.
- tunegfs2
Allows viewing and manipulating GFS2 file system parameters such as UUID, label, lockproto and locktable.
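As an illustration, a typical maintenance sequence might combine these utilities as follows. This is only a sketch: the device path is a placeholder, and the file system must be unmounted on all nodes before running fsck.gfs2.

```
# Check and repair an unmounted GFS2 file system (-y answers yes to repairs)
fsck.gfs2 -y /dev/disk/by-id/DEVICE_ID

# List the superblock settings (UUID, label, lockproto, locktable)
tunegfs2 -l /dev/disk/by-id/DEVICE_ID

# After enlarging the underlying device, grow the mounted file system
gfs2_grow /mnt/shared

# Add two more journals so two additional nodes can mount the volume
gfs2_jadd -j 2 /mnt/shared
```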
26.2 Configuring GFS2 services and a STONITH resource #
Before you can create GFS2 volumes, you must configure DLM and a STONITH resource.
You need to configure a fencing device. Without a STONITH mechanism (like external/sbd) in place, the configuration fails.
1. Start a shell and log in as root or equivalent.

2. Create an SBD partition as described in Procedure 17.3, “Initializing the SBD devices”.

3. Run crm configure.

4. Configure external/sbd as the fencing device:

       crm(live)configure# primitive sbd_stonith stonith:external/sbd \
         params pcmk_delay_max=30 meta target-role="Started"

5. Review your changes with show.

6. If everything is correct, submit your changes with commit and leave the crm live configuration with quit.
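Instead of typing the resource definition interactively, the configuration can also be kept in a file and loaded in one step. This is a sketch only; the file name is arbitrary:

```
# stonith-sbd.crm -- hypothetical file name; apply with:
#   crm configure load update stonith-sbd.crm
primitive sbd_stonith stonith:external/sbd \
    params pcmk_delay_max=30 \
    meta target-role="Started"
```

Keeping cluster configuration in files makes it easier to review and version-control changes before committing them to the CIB.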
For details on configuring the resource for DLM, see Section 24.2, “Configuring DLM cluster resources”.
26.3 Creating GFS2 volumes #
After you have configured DLM as a cluster resource as described in Section 26.2, “Configuring GFS2 services and a STONITH resource”, configure your system to use GFS2 and create GFS2 volumes.
We recommend that you generally store application files and data files on different GFS2 volumes. If your application volumes and data volumes have different requirements for mounting, it is mandatory to store them on different volumes.
Before you begin, prepare the block devices you plan to use for your GFS2 volumes. Leave the devices as free space.
Then create and format the GFS2 volume with the mkfs.gfs2 command, as described in Procedure 26.2, “Creating and formatting a GFS2 volume”. The most important parameters for the command are listed below. For more information and the command syntax, refer to the mkfs.gfs2 man page.
- Lock Protocol Name (-p)

  The name of the locking protocol to use. Acceptable locking protocols are lock_dlm (for shared storage) or, if you are using GFS2 as a local file system (one node only), lock_nolock. If this option is not specified, the lock_dlm protocol is assumed.

- Lock Table Name (-t)

  The lock table field appropriate to the lock module you are using, in the form clustername:fsname. The clustername value must match the one in the cluster configuration file, /etc/corosync/corosync.conf. Only members of this cluster are permitted to use this file system. The fsname value is a unique file system name (1 to 16 characters) used to distinguish this GFS2 file system from others.

- Number of Journals (-j)

  The number of journals for mkfs.gfs2 to create. You need at least one journal per machine that will mount the file system. If this option is not specified, one journal is created.
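As an illustration of the -t format rules, a small shell helper could check a lock table name before use. This is hypothetical and only useful in scripts; mkfs.gfs2 performs its own validation:

```shell
# Hypothetical pre-flight check for the -t argument: the lock table name
# must have the form clustername:fsname, with fsname 1 to 16 characters.
valid_locktable() {
    case "$1" in
        *:*) ;;                      # must contain a colon
        *)   return 1 ;;
    esac
    clustername=${1%%:*}             # part before the first colon
    fsname=${1#*:}                   # part after the first colon
    [ -n "$clustername" ] || return 1
    [ ${#fsname} -ge 1 ] && [ ${#fsname} -le 16 ]
}

valid_locktable "hacluster:sharedfs" && echo "valid"
# prints: valid
```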
Execute the following steps only on one of the cluster nodes.
1. Open a terminal window and log in as root.

2. Check if the cluster is online with the command crm status.

3. Create and format the volume using the mkfs.gfs2 utility. For information about the syntax for this command, refer to the mkfs.gfs2 man page.

   For example, to create a new GFS2 file system that supports up to 32 cluster nodes, use the following command:

       # mkfs.gfs2 -t CLUSTERNAME:FSNAME -p lock_dlm -j 32 /dev/disk/by-id/DEVICE_ID

   CLUSTERNAME must be the same as the cluster_name entry in the file /etc/corosync/corosync.conf. The default is hacluster.

   FSNAME is used to identify this file system and must therefore be unique.

   Always use a stable device name for devices shared between cluster nodes.
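To avoid a mismatch between CLUSTERNAME and the running cluster, the value can be read from corosync.conf in a script. This is a sketch; the awk pattern assumes the standard "cluster_name: NAME" syntax inside the totem section:

```shell
# Print the cluster_name value from a corosync.conf-style file
cluster_name() {
    awk -F': *' '/^[[:space:]]*cluster_name:/ { print $2; exit }' "$1"
}

# Demonstration with a minimal sample file (illustrative only):
cat > /tmp/corosync.conf.sample <<'EOF'
totem {
    version: 2
    cluster_name: hacluster
}
EOF
cluster_name /tmp/corosync.conf.sample
# prints: hacluster
```

In a formatting script, the helper could then be used as, for example, mkfs.gfs2 -t "$(cluster_name /etc/corosync/corosync.conf):FSNAME" together with the other options shown above.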
26.4 Mounting GFS2 volumes #
You can either mount a GFS2 volume manually or with the cluster manager, as described in Procedure 26.4, “Mounting a GFS2 volume with the cluster manager”.
To mount multiple GFS2 volumes, see Procedure 26.5, “Mounting multiple GFS2 volumes with the cluster resource manager”.
1. Open a terminal window and log in as root.

2. Check if the cluster is online with the command crm status.

3. Mount the volume from the command line, using the mount command.
If you mount the GFS2 file system manually for testing purposes, make sure to unmount it again before starting to use it via cluster resources.
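A manual test mount and unmount might look as follows. This is a sketch only: the device path is a placeholder, and DLM must already be running in the cluster for the lock_dlm protocol to work.

```
# Mount the GFS2 volume manually for testing
mount -t gfs2 /dev/disk/by-id/DEVICE_ID /mnt/shared

# ... verify read/write access ...

# Unmount again before managing the volume via cluster resources
umount /mnt/shared
```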
To mount a GFS2 volume with the High Availability software, configure a file system resource in the cluster. The following procedure uses the crm shell to configure the cluster resources. Alternatively, you can also use Hawk2 to configure the resources as described in Section 26.5, “Configuring GFS2 resources with Hawk2”.
1. Start a shell and log in as root or equivalent.

2. Run crm configure.

3. Configure Pacemaker to mount the GFS2 file system on every node in the cluster:

       crm(live)configure# primitive gfs2-1 ocf:heartbeat:Filesystem \
         params device="/dev/disk/by-id/DEVICE_ID" directory="/mnt/shared" fstype="gfs2" \
         op monitor interval="20" timeout="40" \
         op start timeout="60" op stop timeout="60" \
         meta target-role="Started"

4. Add the gfs2-1 primitive to the g-storage group you created in Procedure 24.1, “Configuring a base group for DLM”:

       crm(live)configure# modgroup g-storage add gfs2-1

   Because of the base group's internal colocation and ordering, the gfs2-1 resource can only start on nodes that also have a dlm resource already running.

   Important: Do not use a group for multiple GFS2 resources

   Adding multiple GFS2 resources to a group creates a dependency between the GFS2 volumes. For example, if you created a group with crm configure group g-storage dlm gfs2-1 gfs2-2, then stopping gfs2-1 also stops gfs2-2, and starting gfs2-2 also starts gfs2-1.

   To use multiple GFS2 resources in the cluster, use colocation and order constraints as described in Procedure 26.5, “Mounting multiple GFS2 volumes with the cluster resource manager”.

5. Review your changes with show.

6. If everything is correct, submit your changes with commit and leave the crm live configuration with quit.
To mount multiple GFS2 volumes in the cluster, configure a file system resource for each volume, and colocate them with the dlm resource you created in Procedure 24.2, “Configuring an independent DLM resource”.
Do not add multiple GFS2 resources to a group with DLM. This creates a dependency between the GFS2 volumes. For example, if gfs2-1 and gfs2-2 are in the same group, then stopping gfs2-1 also stops gfs2-2.
1. Log in to a node as root or equivalent.

2. Run crm configure.

3. Create the primitive for the first GFS2 volume:

       crm(live)configure# primitive gfs2-1 Filesystem \
         params directory="/srv/gfs2-1" fstype=gfs2 device="/dev/disk/by-id/DEVICE_ID1" \
         op monitor interval=20 timeout=40 \
         op start timeout=60 interval=0 \
         op stop timeout=60 interval=0

4. Create the primitive for the second GFS2 volume:

       crm(live)configure# primitive gfs2-2 Filesystem \
         params directory="/srv/gfs2-2" fstype=gfs2 device="/dev/disk/by-id/DEVICE_ID2" \
         op monitor interval=20 timeout=40 \
         op start timeout=60 interval=0 \
         op stop timeout=60 interval=0

5. Clone the GFS2 resources so that they can run on all nodes:

       crm(live)configure# clone cl-gfs2-1 gfs2-1 meta interleave=true
       crm(live)configure# clone cl-gfs2-2 gfs2-2 meta interleave=true

6. Add a colocation constraint for both GFS2 resources so that they can only run on nodes where DLM is also running:

       crm(live)configure# colocation col-gfs2-with-dlm inf: ( cl-gfs2-1 cl-gfs2-2 ) cl-dlm

7. Add an order constraint for both GFS2 resources so that they can only start after DLM is already running:

       crm(live)configure# order o-dlm-before-gfs2 Mandatory: cl-dlm ( cl-gfs2-1 cl-gfs2-2 )

8. Review your changes with show.

9. If everything is correct, submit your changes with commit and leave the crm live configuration with quit.
26.5 Configuring GFS2 resources with Hawk2 #
Instead of configuring the DLM and the file system resource for GFS2 manually with the CRM Shell, you can also use the GFS2 template in Hawk2's Wizard.

The GFS2 template in the Wizard does not include the configuration of a STONITH resource. If you use the wizard, you still need to create an SBD device on the shared storage and configure a STONITH resource as described in Procedure 26.1, “Configuring a STONITH resource”.

Using the GFS2 template in the Hawk2 Wizard also leads to a slightly different resource configuration than the manual configuration described in Procedure 24.1, “Configuring a base group for DLM” and Procedure 26.4, “Mounting a GFS2 volume with the cluster manager”.
1. Log in to Hawk2:

       https://HAWKSERVER:7630/

2. In the left navigation bar, select Configuration › Wizards.

3. Expand the File System category and select GFS2 File System (Cloned).

4. Follow the instructions on the screen. If you need information about an option, click it to display a short help text in Hawk2. After the last configuration step, verify the values you have entered.

   The wizard displays the configuration snippet that will be applied to the CIB and any additional changes, if required.

   Figure 26.1: Hawk2 summary screen of GFS2 CIB changes #

5. Check the proposed changes. If everything is according to your wishes, apply the changes.

   A message on the screen shows if the action has been successful.
26.6 Migrating from OCFS2 to GFS2 #
OCFS2 is deprecated in SUSE Linux Enterprise High Availability 15 SP7. It will not be supported in future releases.
Unlike OCFS2, GFS2 does not support the reflink feature.
This procedure shows one method of migrating from OCFS2 to GFS2. It assumes you have a single OCFS2 volume that is part of the g-storage group.
The steps for preparing new block storage and backing up data depend on your specific setup. See the relevant documentation if you need more details.
You only need to perform this procedure on one of the cluster nodes.
Thoroughly test this procedure in a test environment before performing it in a production environment.
1. Prepare a block device for the GFS2 volume.

   To check the disk space required, run df -h on the OCFS2 mount point. For example:

       # df -h /mnt/shared/
       Filesystem      Size  Used Avail Use% Mounted on
       /dev/sdb         10G  2.3G  7.8G  23% /mnt/shared

   Make a note of the disk name under Filesystem. This will be useful later to help check if the migration worked.

   Note: OCFS2 disk usage

   Because some OCFS2 system files can hold disk space instead of returning it to the global bitmap file, the actual disk usage might be less than the amount shown in the df -h output.

2. Install the GFS2 packages on all nodes in the cluster. You can do this on all nodes at once with the following command:

       # crm cluster run "zypper install -y gfs2-utils gfs2-kmp-default"

3. Create and format the GFS2 volume using the mkfs.gfs2 utility. For information about the syntax for this command, refer to the mkfs.gfs2 man page.

   Tip: Key differences between mkfs.ocfs2 and mkfs.gfs2

   OCFS2 uses -C to specify the cluster size and -b to specify the block size. GFS2 also specifies the block size with -b, but has no cluster size setting and therefore does not use -C.

   OCFS2 specifies the number of nodes with -N. GFS2 specifies the number of journals with -j.
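The option mapping can be captured in a small helper that builds the replacement command line from the values previously used with mkfs.ocfs2. This is a hypothetical sketch for scripting the migration; the function name and arguments are illustrative:

```shell
# Hypothetical helper: translate values used with mkfs.ocfs2 into an
# mkfs.gfs2 invocation. -N node slots becomes -j journals; the OCFS2 -C
# cluster size is dropped because GFS2 has no equivalent setting.
gfs2_cmd_from_ocfs2() {
    nodes=$1 locktable=$2 device=$3
    printf 'mkfs.gfs2 -t %s -p lock_dlm -j %s %s\n' "$locktable" "$nodes" "$device"
}

gfs2_cmd_from_ocfs2 32 hacluster:shared /dev/disk/by-id/example-id
# prints: mkfs.gfs2 -t hacluster:shared -p lock_dlm -j 32 /dev/disk/by-id/example-id
```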
   For example, to create a new GFS2 file system that supports up to 32 cluster nodes, use the following command:

       # mkfs.gfs2 -t CLUSTERNAME:FSNAME -p lock_dlm -j 32 /dev/disk/by-id/DEVICE_ID

   CLUSTERNAME must be the same as the cluster_name entry in the file /etc/corosync/corosync.conf. The default is hacluster.

   FSNAME is used to identify this file system and must therefore be unique.

   Always use a stable device name for devices shared between cluster nodes.

4. Put the cluster into maintenance mode:

       # crm maintenance on

5. Back up the OCFS2 volume's data.

6. Start the CRM Shell in interactive mode:

       # crm configure

7. Delete the OCFS2 resource:

       crm(live)# delete ocfs2-1

8. Mount the GFS2 file system on every node in the cluster, using the same mount point that was used for the OCFS2 file system:

       crm(live)# primitive gfs2-1 ocf:heartbeat:Filesystem \
         params device="/dev/disk/by-id/DEVICE_ID" directory="/mnt/shared" fstype="gfs2" \
         op monitor interval="20" timeout="40" \
         op start timeout="60" op stop timeout="60" \
         meta target-role="Started"

9. Add the GFS2 primitive to the g-storage group:

       crm(live)# modgroup g-storage add gfs2-1

10. Review your changes with show.

11. If everything is correct, submit your changes with commit and leave the crm live configuration with quit.

12. Take the cluster out of maintenance mode:

        # crm maintenance off

13. Check the status of the cluster, with expanded details about the group g-storage:

        # crm status detail

    The group should now include the primitive resource gfs2-1.

14. Run df -h on the mount point to make sure the disk name changed:

        # df -h /mnt/shared/
        Filesystem      Size  Used Avail Use% Mounted on
        /dev/sdc         10G  290M  9.8G   3% /mnt/shared

    If the output shows the wrong disk, the new gfs2-1 resource might be restarting. This issue should resolve itself if you wait a short time and then run the command again.

15. Restore the data from the backup to the GFS2 volume.
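The device-name check can also be scripted. A minimal helper, assuming POSIX df output (the -P flag keeps each file system on one line even with long device names):

```shell
# Hypothetical helper: print the device backing a mount point, so the
# before- and after-migration device names can be compared in a script.
mount_device() {
    df -P "$1" | awk 'NR == 2 { print $1 }'
}

# Example: show the device behind the root file system
mount_device /
```

Record the value before the migration and compare it afterwards; a changed device name indicates the new GFS2 volume is mounted.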
Note: GFS2 disk usage

Even after restoring the data, the GFS2 volume might not use as much disk space as the OCFS2 volume.

To make sure that the data appears correctly, check the contents of the mount point. For example:

    # ls -l /mnt/shared/

You can also run this command on other nodes to make sure the data is being shared correctly.

If required, you can now remove the OCFS2 disk.
