16 NFS Ganesha: Export Ceph Data via NFS #
NFS Ganesha is an NFS server (refer to Sharing File Systems with NFS ) that runs in a user address space instead of as part of the operating system kernel. With NFS Ganesha, you can plug in your own storage mechanism—such as Ceph—and access it from any NFS client.
S3 buckets are exported to NFS on a per-user basis, for example via the path
GANESHA_NODE:/USERNAME/BUCKETNAME.
A CephFS is exported by default via the path
GANESHA_NODE:/cephfs.
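For example, assuming the mount points /mnt/cephfs and /mnt/bucket already exist on the client (example paths), the two kinds of exports could be mounted like this (see Section 16.7, “Mounting the Exported NFS Share” for details):
root # mount -t nfs GANESHA_NODE:/cephfs /mnt/cephfs
root # mount -t nfs GANESHA_NODE:/USERNAME/BUCKETNAME /mnt/bucket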
Note: NFS Ganesha Performance
Due to increased protocol overhead and additional latency caused by extra network hops between the client and the storage, accessing Ceph via an NFS Gateway may significantly reduce application performance when compared to native CephFS or Object Gateway clients.
16.1 Installation #
For installation instructions, see Book “Deployment Guide”, Chapter 12 “Installation of NFS Ganesha”.
16.2 Configuration #
For a list of all parameters available within the configuration file, see:
man ganesha-config
man ganesha-ceph-config for CephFS File System Abstraction Layer (FSAL) options.
man ganesha-rgw-config for Object Gateway FSAL options.
This section includes information to help you configure the NFS Ganesha server to export the cluster data accessible via Object Gateway and CephFS.
NFS Ganesha configuration is controlled by
/etc/ganesha/ganesha.conf. Note that changes to this
file are overwritten when DeepSea Stage 4 is executed. To persistently
change the settings, edit the file
/srv/salt/ceph/ganesha/files/ganesha.conf.j2 located on
the Salt master.
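After editing the template, DeepSea Stage 4 needs to be executed again so that the updated ganesha.conf is rendered and distributed to the NFS Ganesha nodes. A minimal sketch, assuming the standard DeepSea orchestration call on the Salt master:
root # salt-run state.orch ceph.stage.4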
16.2.1 Export Section #
This section describes how to configure the EXPORT
sections in the ganesha.conf.
EXPORT
{
Export_Id = 1;
Path = "/";
Pseudo = "/";
Access_Type = RW;
Squash = No_Root_Squash;
[...]
FSAL {
Name = CEPH;
}
}
16.2.1.1 Export Main Section #
- Export_Id
Each export needs to have a unique 'Export_Id' (mandatory).
- Path
Export path in the related CephFS pool (mandatory). This allows subdirectories to be exported from the CephFS.
- Pseudo
Target NFS export path (mandatory for NFSv4). It defines under which NFS export path the exported data is available.
Example: with the value /cephfs/ and after executing
root # mount GANESHA_IP:/cephfs/ /mnt/
the CephFS data is available in the directory /mnt/cephfs/ on the client.
- Access_Type
'RO' for read-only access, 'RW' for read-write access, and 'None' for no access.
Tip: Limit Access to Clients
If you leave Access_Type = RW in the main EXPORT section and limit access to a specific client in the CLIENT section, other clients will still be able to connect. To disable access for all clients and enable access for specific clients only, set Access_Type = None in the EXPORT section and then specify a less restrictive access mode for one or more clients in the CLIENT section:
EXPORT
{
 FSAL {
  access_type = "none";
  [...]
 }

 CLIENT
 {
  clients = 192.168.124.9;
  access_type = "RW";
  [...]
 }
 [...]
}
- Squash
NFS squash option.
- FSAL
Exporting 'File System Abstraction Layer'. See Section 16.2.1.2, “FSAL Subsection”.
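Putting these options together, a sketch of an additional EXPORT section that exports only a CephFS subdirectory could look as follows (Export_Id 2, the path /subdir, and the pseudo path /cephfs/subdir are example values, not part of the default configuration):
EXPORT
{
    Export_Id = 2;
    # Subdirectory inside CephFS to export
    Path = "/subdir";
    # NFS export path seen by the clients
    Pseudo = "/cephfs/subdir";
    Access_Type = RO;
    Squash = No_Root_Squash;
    FSAL {
        Name = CEPH;
    }
}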
16.2.1.2 FSAL Subsection #
EXPORT
{
[...]
FSAL {
Name = CEPH;
}
}
- Name
Defines which back-end NFS Ganesha uses. Allowed values are CEPH for CephFS or RGW for Object Gateway. Depending on the choice, a role-mds or role-rgw must be defined in the policy.cfg.
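For an Object Gateway export, the FSAL subsection typically also identifies the Object Gateway user whose buckets are exported. A minimal sketch, with an example user USERNAME and placeholder S3 credentials (check man ganesha-rgw-config for the options valid for your version):
FSAL {
    Name = RGW;
    # Object Gateway user whose buckets are exported
    User_Id = "USERNAME";
    # S3 credentials of that user
    Access_Key_Id = "ACCESS_KEY";
    Secret_Access_Key = "SECRET_KEY";
}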
16.2.2 RGW Section #
RGW {
ceph_conf = "/etc/ceph/ceph.conf";
name = "name";
cluster = "ceph";
}
- ceph_conf
Points to the ceph.conf file. When deploying with DeepSea, it is not necessary to change this value.
- name
The name of the Ceph client user used by NFS Ganesha.
- cluster
Name of the Ceph cluster. SUSE Enterprise Storage 5.5 currently only supports one cluster name, which is ceph by default.
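If NFS Ganesha fails to connect to the cluster, a common cause is a missing keyring or missing capabilities for the client user named here. Assuming the user is called ganesha (an example name), you can inspect it on a cluster node with:
root # ceph auth get client.ganesha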
16.2.3 Changing Default NFS Ganesha Ports #
By default, NFS Ganesha uses port 2049 for NFS and port 875 for rquota
support. To change the default port numbers, use the
NFS_Port and RQUOTA_Port options inside
the NFS_CORE_PARAM section, for example:
NFS_CORE_PARAM
{
NFS_Port = 2060;
RQUOTA_Port = 876;
}
16.3 Custom NFS Ganesha Roles #
Custom NFS Ganesha roles for cluster nodes can be defined. These roles are
then assigned to nodes in the policy.cfg. The roles
allow for:
Separated NFS Ganesha nodes for accessing Object Gateway and CephFS.
Assigning different Object Gateway users to NFS Ganesha nodes.
Having different Object Gateway users enables NFS Ganesha nodes to access different S3 buckets. S3 buckets can be used for access control. Note: S3 buckets are not to be confused with Ceph buckets used in the CRUSH Map.
16.3.1 Different Object Gateway Users for NFS Ganesha #
The following example procedure for the Salt master shows how to create two
NFS Ganesha roles with different Object Gateway users. In this example, the roles
gold and silver are used, for which
DeepSea already provides example configuration files.
1. Open the file /srv/pillar/ceph/stack/global.yml with the editor of your choice. Create the file if it does not exist. The file needs to contain the following lines:
rgw_configurations:
  - rgw
  - silver
  - gold
ganesha_configurations:
  - silver
  - gold
These roles can later be assigned in the policy.cfg.
2. Create a file /srv/salt/ceph/rgw/users/users.d/gold.yml and add the following content:
- { uid: "gold1", name: "gold1", email: "gold1@demo.nil" }
3. Create a file /srv/salt/ceph/rgw/users/users.d/silver.yml and add the following content:
- { uid: "silver1", name: "silver1", email: "silver1@demo.nil" }
4. Now, templates for the ganesha.conf need to be created for each role. The original template of DeepSea is a good start. Create two copies:
root # cd /srv/salt/ceph/ganesha/files/
root # cp ganesha.conf.j2 silver.conf.j2
root # cp ganesha.conf.j2 gold.conf.j2
5. The new roles require keyrings to access the cluster. To provide access, copy the ganesha.j2:
root # cp ganesha.j2 silver.j2
root # cp ganesha.j2 gold.j2
6. Copy the keyring for the Object Gateway:
root # cd /srv/salt/ceph/rgw/files/
root # cp rgw.j2 silver.j2
root # cp rgw.j2 gold.j2
7. Object Gateway also needs the configuration for the different roles:
root # cd /srv/salt/ceph/configuration/files/
root # cp ceph.conf.rgw silver.conf
root # cp ceph.conf.rgw gold.conf
8. Assign the newly created roles to cluster nodes in the /srv/pillar/ceph/proposals/policy.cfg:
role-silver/cluster/NODE1.sls
role-gold/cluster/NODE2.sls
Replace NODE1 and NODE2 with the names of the nodes to which you want to assign the roles.
9. Execute DeepSea Stages 0 to 4 (see the example commands below).
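The stages are run on the Salt master. A minimal sketch, assuming the standard DeepSea orchestration calls:
root # salt-run state.orch ceph.stage.0
root # salt-run state.orch ceph.stage.1
root # salt-run state.orch ceph.stage.2
root # salt-run state.orch ceph.stage.3
root # salt-run state.orch ceph.stage.4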
16.3.2 Separating CephFS and Object Gateway FSAL #
The following example procedure for the Salt master shows how to create two new roles that use CephFS and Object Gateway:
1. Open the file /srv/pillar/ceph/rgw.sls with the editor of your choice. Create the file if it does not exist. The file needs to contain the following lines:
rgw_configurations:
  ganesha_cfs:
    users:
      - { uid: "demo", name: "Demo", email: "demo@demo.nil" }
  ganesha_rgw:
    users:
      - { uid: "demo", name: "Demo", email: "demo@demo.nil" }
ganesha_configurations:
  - ganesha_cfs
  - ganesha_rgw
These roles can later be assigned in the policy.cfg.
2. Now, templates for the ganesha.conf need to be created for each role. The original template of DeepSea is a good start. Create two copies:
root # cd /srv/salt/ceph/ganesha/files/
root # cp ganesha.conf.j2 ganesha_rgw.conf.j2
root # cp ganesha.conf.j2 ganesha_cfs.conf.j2
3. Edit the ganesha_rgw.conf.j2 and remove the section:
{% if salt.saltutil.runner('select.minions', cluster='ceph', roles='mds') != [] %}
[...]
{% endif %}
4. Edit the ganesha_cfs.conf.j2 and remove the section:
{% if salt.saltutil.runner('select.minions', cluster='ceph', roles=role) != [] %}
[...]
{% endif %}
5. The new roles require keyrings to access the cluster. To provide access, copy the ganesha.j2:
root # cp ganesha.j2 ganesha_rgw.j2
root # cp ganesha.j2 ganesha_cfs.j2
The line caps mds = "allow *" can be removed from the ganesha_rgw.j2.
6. Copy the keyring for the Object Gateway:
root # cp /srv/salt/ceph/rgw/files/rgw.j2 \
/srv/salt/ceph/rgw/files/ganesha_rgw.j2
7. Object Gateway needs the configuration for the new role:
root # cp /srv/salt/ceph/configuration/files/ceph.conf.rgw \
/srv/salt/ceph/configuration/files/ceph.conf.ganesha_rgw
8. Assign the newly created roles to cluster nodes in the /srv/pillar/ceph/proposals/policy.cfg:
role-ganesha_rgw/cluster/NODE1.sls
role-ganesha_cfs/cluster/NODE1.sls
Replace NODE1 with the name of the node to which you want to assign the roles.
9. Execute DeepSea Stages 0 to 4.
16.3.3 Supported Operations #
The RGW NFS interface supports most operations on files and directories, with the following restrictions:
Links including symbolic links are not supported.
NFS access control lists (ACLs) are not supported. Unix user and group ownership and permissions are supported.
Directories may not be moved or renamed. You may move files between directories.
Only full, sequential write I/O is supported. Therefore, write operations are forced to be uploads. Many typical I/O operations, such as editing files in place, will necessarily fail because they perform non-sequential stores. Some file utilities apparently write sequentially (for example, some versions of GNU tar) but may still fail because of infrequent non-sequential stores. When mounting via NFS, an application's sequential I/O can generally be forced into sequential writes to the NFS server by mounting synchronously (the -o sync option). NFS clients that cannot mount synchronously (for example, Microsoft Windows*) will not be able to upload files.
NFS RGW supports read-write operations only for block sizes smaller than 4 MB.
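To illustrate the sequential write restriction above: copying a complete file into a synchronously mounted bucket is a sequential write and results in an S3 upload, whereas writing into the middle of an existing file is a non-sequential store and fails. A short sketch with placeholder host, user, and bucket names:
root # mount -t nfs -o sync GANESHA_NODE:/USERNAME/BUCKETNAME /mnt/bucket
root # cp /tmp/archive.tar /mnt/bucket/
root # dd if=/dev/zero of=/mnt/bucket/archive.tar bs=1M count=1 seek=10 conv=notrunc
The cp command succeeds because it writes the whole file sequentially; the dd command fails because it writes at an arbitrary offset instead of appending sequentially.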
16.4 Starting or Restarting NFS Ganesha #
To enable and start the NFS Ganesha service, run:
root # systemctl enable nfs-ganesha
root # systemctl start nfs-ganesha
Restart NFS Ganesha with:
root # systemctl restart nfs-ganesha
When NFS Ganesha is started or restarted, it has a grace timeout of 90 seconds for NFS v4. During the grace period, new requests from clients are actively rejected. Hence, clients may experience slow requests while the server is in the grace period.
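To see when the grace period begins and ends, you can follow the NFS Ganesha log file (by default /var/log/ganesha/ganesha.log, as configured in /etc/sysconfig/nfs-ganesha):
root # tail -f /var/log/ganesha/ganesha.log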
16.5 Setting the Log Level #
You can change the default debug level NIV_EVENT by editing
the file /etc/sysconfig/nfs-ganesha. Replace
NIV_EVENT with NIV_DEBUG or
NIV_FULL_DEBUG. Increasing the log verbosity can produce
large amounts of data in the log files.
OPTIONS="-L /var/log/ganesha/ganesha.log -f /etc/ganesha/ganesha.conf -N NIV_EVENT"
A restart of the service is required when changing the log level.
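For example, to raise the log level to NIV_DEBUG, change the line in /etc/sysconfig/nfs-ganesha to
OPTIONS="-L /var/log/ganesha/ganesha.log -f /etc/ganesha/ganesha.conf -N NIV_DEBUG"
and restart the service:
root # systemctl restart nfs-ganesha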
Note
NFS Ganesha uses Ceph client libraries to connect to the Ceph
cluster. By default, the client libraries do not log errors or any other
output. To see more details about NFS Ganesha's interaction with the
Ceph cluster (for example, details of connection issues), logging needs
to be explicitly defined in the ceph.conf configuration file
under the [client] section. For example:
[client]
log_file = "/var/log/ceph/ceph-client.log"
16.6 Verifying the Exported NFS Share #
When using NFS v3, you can verify whether the NFS shares are exported on the NFS Ganesha server node:
root # showmount -e
/ (everything)
16.7 Mounting the Exported NFS Share #
To mount the exported NFS share (as configured in Section 16.2, “Configuration”) on a client host, run:
root # mount -t nfs -o rw,noatime,sync \
nfs_ganesha_server_hostname:/ /path/to/local/mountpoint
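To mount the export persistently across reboots, a corresponding /etc/fstab entry could look like this (a sketch; adjust the host name, mount point, and options to your environment):
nfs_ganesha_server_hostname:/ /path/to/local/mountpoint nfs rw,noatime,sync 0 0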
16.8 Additional Resources #
The original NFS Ganesha documentation can be found at https://github.com/nfs-ganesha/nfs-ganesha/wiki/Docs.