13 Exporting Ceph Data via Samba #
This chapter describes how to export data stored in a Ceph cluster via a Samba/CIFS share so that you can easily access it from Windows* client machines. It also includes information that will help you configure a Ceph Samba gateway to join Active Directory in the Windows* domain to authenticate and authorize users.
Note: Samba Gateway Performance
Because of increased protocol overhead and additional latency caused by extra network hops between the client and the storage, accessing CephFS via a Samba Gateway may significantly reduce application performance when compared to native Ceph clients.
13.1 Export CephFS via Samba Share #
Warning: Cross-Protocol Access
Native CephFS and NFS clients are not restricted by file locks obtained via Samba, and vice versa. Applications that rely on cross-protocol file locking may experience data corruption if CephFS-backed Samba share paths are accessed via other means.
13.1.1 Samba Related Packages Installation #
To configure and export a Samba share, the following packages need to be installed: samba-ceph and samba-winbind. If these packages are not installed, install them:
cephadm@smb > zypper install samba-ceph samba-winbind
13.1.2 Single Gateway Example #
In preparation for exporting a Samba share, choose an appropriate node to act as a Samba Gateway. The node needs to have access to the Ceph client network, as well as sufficient CPU, memory, and networking resources.
Failover functionality can be provided with CTDB and the SUSE Linux Enterprise High Availability Extension. Refer to Section 13.1.3, “High Availability Configuration” for more information on HA setup.
Make sure that a working CephFS already exists in your cluster. For details, see Chapter 11, Installation of CephFS.
Create a Samba Gateway specific keyring on the Ceph admin node and copy it to the Samba Gateway node:
cephadm > ceph auth get-or-create client.samba.gw mon 'allow r' \
  osd 'allow *' mds 'allow *' -o ceph.client.samba.gw.keyring
cephadm > scp ceph.client.samba.gw.keyring SAMBA_NODE:/etc/ceph/
Replace SAMBA_NODE with the name of the Samba gateway node.
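Optionally, before copying the keyring, you can verify that the key carries the expected capabilities (mon 'allow r', osd 'allow *', mds 'allow *') by querying it on the admin node, for example:
cephadm > ceph auth get client.samba.gw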
The following steps are executed on the Samba Gateway node. Install Samba together with the Ceph integration package:
cephadm@smb > sudo zypper in samba samba-ceph
Replace the default contents of the /etc/samba/smb.conf file with the following:
[global]
netbios name = SAMBA-GW
clustering = no
idmap config * : backend = tdb2
passdb backend = tdbsam
# disable print server
load printers = no
smbd: backgroundqueue = no

[SHARE_NAME]
path = /
vfs objects = ceph
ceph: config_file = /etc/ceph/ceph.conf
ceph: user_id = samba.gw
read only = no
oplocks = no
kernel share modes = no
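As an optional sanity check, you can have Samba's testparm utility parse the edited file and report syntax problems before any services are started; the exact messages printed depend on your Samba version:
cephadm@smb > testparm -s /etc/samba/smb.conf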
Tip: Oplocks and Share Modes
oplocks (also known as SMB2+ leases) allow for improved performance through aggressive client caching, but are currently unsafe when Samba is deployed together with other CephFS clients, such as kernel mount.ceph, FUSE, or NFS Ganesha.
Currently kernel share modes needs to be disabled in a share running with the CephFS vfs module for file serving to work properly.
Important: Permitting Access
Since vfs_ceph does not require a file system mount, the share path is interpreted as an absolute path within the Ceph file system on the attached Ceph cluster. For successful share I/O, the path's access control list (ACL) needs to permit access from the mapped user for the given Samba client. You can modify the ACL by temporarily mounting via the CephFS kernel client and using the chmod, chown, or setfacl utilities against the share path. For example, to permit access for all users, run:
root # chmod 777 MOUNTED_SHARE_PATH
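The commands below are only a sketch of such a temporary mount for the example share above (whose path is /, so the mount point itself corresponds to MOUNTED_SHARE_PATH). MON_HOST is a placeholder for one of your monitor addresses, CLIENT_SECRET stands for the secret of the samba.gw key (printed by ceph auth get-key client.samba.gw), and /mnt/cephfs is an arbitrary mount point:
root # mount -t ceph MON_HOST:/ /mnt/cephfs -o name=samba.gw,secret=CLIENT_SECRET
root # chmod 777 /mnt/cephfs
root # umount /mnt/cephfs
A more restrictive ACL can be set the same way with setfacl instead of chmod 777.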
Start and enable the Samba daemons:
cephadm@smb > sudo systemctl start smb.service
cephadm@smb > sudo systemctl enable smb.service
cephadm@smb > sudo systemctl start nmb.service
cephadm@smb > sudo systemctl enable nmb.service
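To quickly check that the share is being served, you can, for example, list its contents with smbclient on the gateway itself. This assumes a Samba user was created beforehand with smbpasswd -a USER_NAME, as the tdbsam passdb backend in the example configuration requires; USER_NAME is a placeholder:
cephadm@smb > smbclient //localhost/SHARE_NAME -U USER_NAME -c 'ls'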
13.1.3 High Availability Configuration #
Important: Transparent Failover Not Supported
Although a multi-node Samba + CTDB deployment is more highly available than a single-node deployment (see Section 13.1.2, “Single Gateway Example”), client-side transparent failover is not supported. Applications will likely experience a short outage on Samba Gateway node failure.
This section provides an example of how to set up a two-node high
availability configuration of Samba servers. The setup requires the SUSE Linux Enterprise
High Availability Extension. The two nodes are called earth
(192.168.1.1) and mars
(192.168.1.2).
For details about SUSE Linux Enterprise High Availability Extension, see https://documentation.suse.com/sle-ha/12-SP5/.
Additionally, two floating virtual IP addresses allow clients to connect to
the service no matter which physical node it is running on.
192.168.1.10 is used for cluster
administration with Hawk2 and
192.168.2.1 is used exclusively
for the CIFS exports. This makes it easier to apply security restrictions
later.
The following procedure describes the example installation. More details can be found at https://documentation.suse.com/sle-ha/12-SP5/single-html/SLE-HA-install-quick/.
Create a Samba Gateway specific keyring on the Admin Node and copy it to both nodes:
cephadm > ceph auth get-or-create client.samba.gw mon 'allow r' \
  osd 'allow *' mds 'allow *' -o ceph.client.samba.gw.keyring
cephadm > scp ceph.client.samba.gw.keyring earth:/etc/ceph/
cephadm > scp ceph.client.samba.gw.keyring mars:/etc/ceph/
SLE-HA setup requires a fencing device to avoid a split brain situation when active cluster nodes become unsynchronized. For this purpose, you can use a Ceph RBD image with Stonith Block Device (SBD). Refer to https://documentation.suse.com/sle-ha/12-SP5/single-html/SLE-HA-guide/#sec-ha-storage-protect-fencing-setup for more details.
If it does not yet exist, create an RBD pool called rbd (see Book “Administration Guide”, Chapter 8 “Managing Storage Pools”, Section 8.2.2 “Create a Pool”) and associate it with the rbd application (see Book “Administration Guide”, Chapter 8 “Managing Storage Pools”, Section 8.1 “Associate Pools with an Application”). Then create a related RBD image called sbd01:
cephadm > ceph osd pool create rbd PG_NUM PGP_NUM replicated
cephadm > ceph osd pool application enable rbd rbd
cephadm > rbd -p rbd create sbd01 --size 64M --image-shared
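Optionally, you can inspect the new image to confirm its size and features, for example:
cephadm > rbd -p rbd info sbd01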
Prepare earth and mars to host the Samba service:
Make sure the following packages are installed before you proceed: ctdb, tdb-tools, and samba (needed for smb.service and nmb.service).
cephadm@smb > zypper in ctdb tdb-tools samba samba-ceph
Make sure the services ctdb, smb, and nmb are stopped and disabled:
cephadm@smb > sudo systemctl disable ctdb
cephadm@smb > sudo systemctl disable smb
cephadm@smb > sudo systemctl disable nmb
cephadm@smb > sudo systemctl stop ctdb
cephadm@smb > sudo systemctl stop smb
cephadm@smb > sudo systemctl stop nmb
Open port 4379 of your firewall on all nodes. This is needed for CTDB to communicate with other cluster nodes.
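How you open the port depends on the firewall in use on your nodes. Assuming firewalld, for example, you could run the following on each Samba Gateway node; adapt the commands if you use SuSEfirewall2 or another firewall:
root # firewall-cmd --permanent --add-port=4379/tcp
root # firewall-cmd --reload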
On earth, create the configuration files for Samba. They will later automatically synchronize to mars.
Insert a list of private IP addresses of Samba Gateway nodes in the /etc/ctdb/nodes file. Find more details in the ctdb manual page (man 7 ctdb).
192.168.1.1
192.168.1.2
Configure Samba. Add the following lines in the [global] section of /etc/samba/smb.conf. Use the host name of your choice in place of SAMBA-HA-GW (all nodes in the cluster will appear as one big node with this name). Add a share definition as well; SHARE_NAME is used here as an example:
[global]
netbios name = SAMBA-HA-GW
clustering = yes
idmap config * : backend = tdb2
passdb backend = tdbsam
ctdbd socket = /var/lib/ctdb/ctdb.socket
# disable print server
load printers = no
smbd: backgroundqueue = no

[SHARE_NAME]
path = /
vfs objects = ceph
ceph: config_file = /etc/ceph/ceph.conf
ceph: user_id = samba.gw
read only = no
oplocks = no
kernel share modes = no
Note that the /etc/ctdb/nodes and /etc/samba/smb.conf files need to match on all Samba Gateway nodes.
Install and bootstrap the SUSE Linux Enterprise High Availability cluster.
Register the SUSE Linux Enterprise High Availability Extension on
earth and mars:
root@earth # SUSEConnect -r ACTIVATION_CODE -e E_MAIL
root@mars # SUSEConnect -r ACTIVATION_CODE -e E_MAIL
Install ha-cluster-bootstrap on both nodes:
root@earth # zypper in ha-cluster-bootstrap
root@mars # zypper in ha-cluster-bootstrap
Map the RBD image sbd01 on both Samba Gateways via rbdmap.service.
Edit /etc/ceph/rbdmap and add an entry for the SBD image:
rbd/sbd01 id=samba.gw,keyring=/etc/ceph/ceph.client.samba.gw.keyring
Enable and start rbdmap.service:
root@earth # systemctl enable rbdmap.service && systemctl start rbdmap.service
root@mars # systemctl enable rbdmap.service && systemctl start rbdmap.service
The /dev/rbd/rbd/sbd01 device should be available on both Samba Gateways.
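To confirm this quickly on each node, you can, for example, query the mapped block device:
root@earth # lsblk /dev/rbd/rbd/sbd01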
Initialize the cluster on earth and let mars join it.
root@earth # ha-cluster-init
root@mars # ha-cluster-join -c earth
Important
During the process of initialization and joining the cluster, you will be interactively asked whether to use SBD. Confirm with
y and then specify /dev/rbd/rbd/sbd01 as the path to the storage device.
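If you want to verify that the SBD device was initialized during the cluster setup, you can dump its header on either node; the sbd tool is installed as part of the High Availability setup and its exact output depends on the sbd version:
root@earth # sbd -d /dev/rbd/rbd/sbd01 dump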
Check the status of the cluster. You should see two nodes added to the cluster:
root@earth # crm status
2 nodes configured
1 resource configured

Online: [ earth mars ]

Full list of resources:

 admin-ip       (ocf::heartbeat:IPaddr2):       Started earth
Execute the following commands on earth to configure the CTDB resource:
root@earth # crm configure
crm(live)configure# primitive ctdb ocf:heartbeat:CTDB params \
    ctdb_manages_winbind="false" \
    ctdb_manages_samba="false" \
    ctdb_recovery_lock="!/usr/lib64/ctdb/ctdb_mutex_ceph_rados_helper ceph client.samba.gw cephfs_metadata ctdb-mutex" \
    ctdb_socket="/var/lib/ctdb/ctdb.socket" \
    op monitor interval="10" timeout="20" \
    op start interval="0" timeout="200" \
    op stop interval="0" timeout="100"
crm(live)configure# primitive nmb systemd:nmb \
    op start timeout="100" interval="0" \
    op stop timeout="100" interval="0" \
    op monitor interval="60" timeout="100"
crm(live)configure# primitive smb systemd:smb \
    op start timeout="100" interval="0" \
    op stop timeout="100" interval="0" \
    op monitor interval="60" timeout="100"
crm(live)configure# group g-ctdb ctdb nmb smb
crm(live)configure# clone cl-ctdb g-ctdb meta interleave="true"
crm(live)configure# commit
The binary /usr/lib64/ctdb/ctdb_mutex_ceph_rados_helper in the configuration option ctdb_recovery_lock has the parameters CLUSTER_NAME, CEPHX_USER, RADOS_POOL, and RADOS_OBJECT, in this order.
An extra lock-timeout parameter can be appended to override the default value used (10 seconds). A higher value will increase the CTDB recovery master failover time, whereas a lower value may result in the recovery master being incorrectly detected as down, triggering flapping failovers.
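For example, to set an explicit lock-timeout of 15 seconds (the value here is only illustrative, not a recommendation), the parameter in the ctdb primitive above would become:
ctdb_recovery_lock="!/usr/lib64/ctdb/ctdb_mutex_ceph_rados_helper ceph client.samba.gw cephfs_metadata ctdb-mutex 15"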
Add a clustered IP address:
crm(live)configure# primitive ip ocf:heartbeat:IPaddr2 params ip=192.168.2.1 \
    unique_clone_address="true" \
    op monitor interval="60" \
    meta resource-stickiness="0"
crm(live)configure# clone cl-ip ip \
    meta interleave="true" clone-node-max="2" globally-unique="true"
crm(live)configure# colocation col-with-ctdb 0: cl-ip cl-ctdb
crm(live)configure# order o-with-ctdb 0: cl-ip cl-ctdb
crm(live)configure# commit
If unique_clone_address is set to true, the IPaddr2 resource agent adds a clone ID to the specified address, leading to three different IP addresses. These are usually not needed, but help with load balancing. For further information about this topic, see https://documentation.suse.com/sle-ha/15-SP1/single-html/SLE-HA-guide/#cha-ha-lb.
Check the result:
root@earth # crm status
Clone Set: base-clone [dlm]
     Started: [ factory-1 ]
     Stopped: [ factory-0 ]
Clone Set: cl-ctdb [g-ctdb]
     Started: [ factory-1 ]
     Started: [ factory-0 ]
Clone Set: cl-ip [ip] (unique)
     ip:0       (ocf:heartbeat:IPaddr2):       Started factory-0
     ip:1       (ocf:heartbeat:IPaddr2):       Started factory-1
Test from a client machine. On a Linux client, run the following command to see if you can copy files from and to the system:
root # smbclient //192.168.2.1/SHARE_NAME
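To exercise the copy path end to end, you can, for example, upload and then download a small file from within the interactive smbclient session; USER_NAME stands for a Samba user known to the gateways and testfile.txt is an arbitrary local file:
root # smbclient //192.168.2.1/SHARE_NAME -U USER_NAME
smb: \> put testfile.txt
smb: \> get testfile.txt
smb: \> exit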