Glossary
General
- Admin node
The node from which you run the ceph-deploy utility to deploy Ceph on OSD nodes.
- Bucket
A node in the CRUSH hierarchy that aggregates other nodes into a hierarchy of physical locations.
Important: Do Not Confuse with S3 Buckets
S3 buckets or containers are a different concept: they act as folders for storing objects.
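For illustration only, the sketch below models a hypothetical bucket hierarchy (racks, hosts, OSDs; all names invented) as a nested mapping and walks it to collect the OSDs beneath a bucket. It is not the actual CRUSH map syntax, just a picture of how buckets aggregate the nodes below them.

```python
# Hypothetical bucket hierarchy: rack -> host -> OSD.
# Each inner level is a "bucket" aggregating the nodes below it.
crush_tree = {
    "rack1": {
        "node1": ["osd.0", "osd.1"],
        "node2": ["osd.2", "osd.3"],
    },
    "rack2": {
        "node3": ["osd.4", "osd.5"],
    },
}

def leaf_osds(bucket):
    """Collect every OSD underneath a bucket, at any depth."""
    if isinstance(bucket, list):  # reached the devices themselves
        return list(bucket)
    osds = []
    for child in bucket.values():
        osds.extend(leaf_osds(child))
    return osds

print(leaf_osds(crush_tree["rack1"]))  # ['osd.0', 'osd.1', 'osd.2', 'osd.3']
```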
- CRUSH, CRUSH Map
Controlled Replication Under Scalable Hashing: An algorithm that determines how to store and retrieve data by computing data storage locations. CRUSH requires a map of the cluster to pseudo-randomly store and retrieve data in OSDs with a uniform distribution of data across the cluster.
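To make the "compute locations instead of looking them up" idea concrete, here is a minimal Python sketch of a straw2-style weighted choice, one of the bucket types CRUSH uses: each candidate gets a deterministic pseudo-random draw scaled by its weight, and the highest draw wins. The hash function and host names are illustrative assumptions; real CRUSH uses its own hash functions and applies full placement rules across the hierarchy.

```python
import hashlib
import math

def weighted_choice(pg_id, items):
    """Straw2-style selection sketch: per-(pg, item) draws are
    deterministic, so every client computes the same placement with
    no central lookup table, and reweighting one item only moves
    data to or from that item."""
    best, best_draw = None, -math.inf
    for name, weight in items.items():
        if weight <= 0:
            continue
        h = hashlib.sha256(f"{pg_id}:{name}".encode()).digest()
        u = (int.from_bytes(h[:8], "big") + 1) / 2.0**64  # in (0, 1]
        draw = math.log(u) / weight                       # weight-scaled draw
        if draw > best_draw:
            best, best_draw = name, draw
    return best

# Hypothetical hosts with relative capacities:
hosts = {"node1": 1.0, "node2": 1.0, "node3": 2.0}
print([weighted_choice(pg, hosts) for pg in range(8)])
```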
- Monitor node, MON
A cluster node that maintains maps of cluster state, including the monitor map and the OSD map.
- Node
Any single machine or server in a Ceph cluster.
- OSD
Depending on context, Object Storage Device or Object Storage Daemon. The ceph-osd daemon is the component of Ceph that is responsible for storing objects on a local file system and providing access to them over the network.
- OSD node
A cluster node that stores data; handles data replication, recovery, backfilling, and rebalancing; and provides some monitoring information to Ceph monitors by checking other Ceph OSD daemons.
- PG
Placement Group: a sub-division of a pool, used for performance tuning.
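As a simplified sketch of the idea, an object is assigned to a PG by hashing its name and reducing it modulo the pool's PG count. The actual Ceph code uses the rjenkins hash and a "stable modulo", so the function below is an illustrative assumption, not Ceph's implementation.

```python
import zlib

def object_to_pg(object_name: str, pg_num: int) -> int:
    # Sketch only: real Ceph hashes with rjenkins and uses a
    # stable modulo so that growing pg_num moves few objects.
    return zlib.crc32(object_name.encode()) % pg_num

print(object_to_pg("my-disk-image", pg_num=128))
```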
- Pool
A logical partition for storing objects such as disk images.
- Routing tree
A diagram that shows the various routes a receiver can take.
- Rule Set
Rules to determine data placement for a pool.
Ceph-Specific Terms
- Alertmanager
A single binary that handles alerts sent by the Prometheus server and notifies end users.
- Ceph Storage Cluster
The core set of storage software that stores the user’s data. Such a set consists of Ceph monitors and OSDs.
Also known as the “Ceph Object Store”.
- Grafana
A database analytics and monitoring solution.
- Prometheus
A systems monitoring and alerting toolkit.