A little cheatsheet for administering Ceph. https://docs.ceph.com/en/latest/ Last modified: 23.10.2023

ceph config

ceph config dump
Dumps the entire monitor configuration database
ceph config get {who} {option}
Dumps the configuration option stored in the monitor configuration database for a specific daemon or client (for example osd.0). Individual options can be specified.
ceph config set {who} {option} {value}
sets a configuration option in the monitor configuration database (for example ceph config set osd.0 debug_ms 20)
ceph config show {who}
shows runtime settings for a running daemon (to see all settings use: ceph config show-with-defaults)
ceph config assimilate-conf -i {input_file} -o {output_file}
ingests a configuration file from the input file and moves any valid options into the monitor configuration database
ceph config help {option} (-f json-pretty)
gets help for a particular option ({option} is required).

ceph tell {who} config set {option} {value}
temporarily overrides a setting (for example ceph tell osd.123 config set debug_osd 20). You can also specify wildcards: osd.* (to change settings for all OSDs.)
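A minimal debug-logging workflow built from the commands above (the OSD id and debug level are illustrative, not prescriptive):
# temporarily raise messenger debug logging on one OSD
ceph tell osd.0 config set debug_ms 20
# ...reproduce the issue, then turn it back down...
ceph tell osd.0 config set debug_ms 0
# check what the daemon is actually running with
ceph config show osd.0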
ceph is part of Ceph, a massively scalable, open-source, distributed storage system. Please refer to the Ceph documentation at https://docs.ceph.com for more information.

Options

-i infile
will specify an input file to be passed along as a payload with the command to the monitor cluster. This is only used for specific monitor commands.
-o outfile
will write any payload returned by the monitor cluster with its reply to outfile. Only specific monitor commands (e.g. osd getmap) return a payload.
--setuser user
will apply the appropriate user ownership to the file specified by the option ‘-o’.
--setgroup group
will apply the appropriate group ownership to the file specified by the option ‘-o’.
-c ceph.conf, --conf=ceph.conf
Use ceph.conf configuration file instead of the default /etc/ceph/ceph.conf to determine monitor addresses during startup.
--id CLIENT_ID, --user CLIENT_ID
Client id for authentication.
--name CLIENT_NAME, -n CLIENT_NAME
Client name for authentication.
--cluster CLUSTER
Name of the Ceph cluster.
--admin-daemon ADMIN_SOCKET, daemon DAEMON_NAME
Submit admin-socket commands via admin sockets in /var/run/ceph.
-s, --status
Show cluster status.
-w, --watch
Watch live cluster changes on the default ‘cluster’ channel
-W, --watch-channel
Watch live cluster changes on any channel (cluster, audit, cephadm, or * for all)
--watch-debug
Watch debug events.
--watch-info
Watch info events.
--watch-sec
Watch security events.
--watch-warn
Watch warning events.
--watch-error
Watch error events.
--version, -v
Display version.
--verbose
Make verbose.
--concise
Make less verbose.
-f {json,json-pretty,xml,xml-pretty,plain,yaml}, --format
Format of output. Note: yaml is only valid for orch commands.
--connect-timeout CLUSTER_TIMEOUT
Set a timeout for connecting to the cluster.
--no-increasing
--no-increasing is off by default, so increasing OSD weights is allowed by the reweight-by-utilization and test-reweight-by-utilization commands. When this option is passed to those commands, OSD weights are never increased, even if an OSD is underutilized.
--block
Block until completion (scrub and deep-scrub only).
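A couple of these options combined, as a sketch (the timeout value is arbitrary):
# machine-readable cluster status, giving up after 5 seconds
ceph --status --format json-pretty --connect-timeout 5
# watch only warning and error events
ceph --watch-warn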

ceph mon

ceph mon dump {int0-}
dumps formatted monmap; if an integer is given, you get the monmap from epoch {integer}
ceph mon add {name} {IPaddr[:port]}
adds new monitor named {name} at {addr}
ceph mon getmap {int0-}
gets monmap (from specified epoch)
ceph mon remove {name}
removes monitor named {name}
ceph mon stat
summarizes monitor status
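For example, adding a monitor and verifying quorum afterwards (name and address are placeholders):
ceph mon add node4 192.168.1.14:6789
ceph mon stat
ceph quorum_status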
 

ceph mgr

ceph mgr dump
dumps latest MgrMap, which describes the active & standby manager daemons
ceph mgr fail {name}
will mark a manager daemon as failed, removing it from the manager map
ceph mgr module ls
will list currently enabled manager modules (plugins)
ceph mgr module enable {module}
will enable a manager module. Available modules are included in the MgrMap and visible via mgr dump
ceph mgr module disable {module}
will disable an active manager module
ceph mgr metadata {name}
will report metadata about all manager daemons, or about a single manager daemon if a name is specified
ceph mgr versions
will report a count of running daemon versions
ceph mgr count-metadata {field}
will report a count of any daemon metadata field
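A short sketch of managing modules (the dashboard module is just an example; check ceph mgr module ls to see what your cluster offers):
ceph mgr module ls
ceph mgr module enable dashboard
# confirm via the MgrMap
ceph mgr dump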

Miscellaneous

ceph tell mon.<id> quorum enter|exit
Cause a specific MON to enter or exit quorum.
ceph quorum_status
Reports status of monitor quorum.
ceph report {<tags> [<tags>...]}
Reports full status of cluster, optional title tag strings.
ceph status
Shows cluster status.
ceph tell <name (type.id)> <command> [options...]
Sends a command to a specific daemon.
ceph tell <name (type.id)> help
List all available commands.
ceph version
Show mon daemon version
ceph fs dump
get MDS Map
 
ceph balancer status
get status of ceph balancer
ceph balancer off / on
enable / disable ceph balancer
ceph balancer mode crush-compat or upmap
set balancer mode to crush-compat or upmap (default)
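Putting the balancer commands together (using upmap mode assumes your clients are recent enough; see set-require-min-compat-client below):
ceph balancer mode upmap
ceph balancer on
ceph balancer status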
ceph -s / --status
shows the current ceph status
ceph df detail
shows data usage in raw storage and pools
ceph-volume lvm list or ceph device ls
shows all disks
ceph crash ls / ls-new
shows all mgr module crash dumps (or only list new crash dumps with ls-new)
ceph crash info {crashid}
shows detailed information for the crash dump with a specific crashid
ceph crash archive-all
archive all crash dumps
ceph crash rm {crashid}
removes crash dump with specific id
ceph crash stat
Lists the timestamp/uuid crashids for all new crash info.
ceph crash prune {keep}
Remove saved crashes older than ‘keep’ days. {keep} must be an integer.
ceph crash archive {crashid}
Archive a crash report so that it is no longer considered for the RECENT_CRASH health check and does not appear in the crash ls-new output (it will still appear in the crash ls output).
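A typical triage pass (the crashid shown is a made-up placeholder; real ids are timestamp/uuid strings from crash ls):
ceph crash ls-new
ceph crash info 2023-10-23T09:00:00.000000Z_0b2d...
ceph crash archive 2023-10-23T09:00:00.000000Z_0b2d...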
 
crushtool -d {compiled-crushmap-file} -o {output-decomp-crushmap-file}
decompile crushmap (ceph osd getcrushmap -o {file}) to a readable format. Now you can open it with any common text editor (vim, nano, vi) or read it with cat / less
crushtool -c {modified-crushmap-filename} -o {modified-compiled-crushmap-file}
recompile crushmap after modifying to output file (-o)
ceph osd setcrushmap -i {modified-compiled-crushmap-file}
set new crushmap from file
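The full round trip looks like this (file names are arbitrary):
# export and decompile the current CRUSH map
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# edit crushmap.txt, then recompile and inject it
crushtool -c crushmap.txt -o crushmap-new.bin
ceph osd setcrushmap -i crushmap-new.bin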
 

ceph pg

ceph pg debug unfound_objects_exist|degraded_pgs_exist
shows debug info about pgs.
ceph pg deep-scrub <pgid>
starts deep-scrub on <pgid>.
ceph pg dump {all|summary|sum|delta|pools|osds|pgs|pgs_brief} [{all|summary|sum|delta|pools|osds|pgs|pgs_brief}...]
shows human-readable versions of pg map (only ‘all’ valid with plain).
ceph pg dump_json {all|summary|sum|delta|pools|osds|pgs|pgs_brief} [{all|summary|sum|delta|pools|osds|pgs|pgs_brief}...]
shows human-readable version of pg map in json only.
ceph pg dump_pools_json
shows pg pools info in json only.
ceph pg dump_stuck {inactive|unclean|stale|undersized|degraded [inactive|unclean|stale|undersized|degraded...]} {<int>}
shows information about stuck pgs.
ceph pg getmap
gets binary pg map to -o/stdout.
ceph pg ls {<int>} {<pg-state> [<pg-state>...]}
lists pg with specific pool, osd, state
ceph pg ls-by-osd <osdname (id|osd.id)> {<int>} {<pg-state> [<pg-state>...]}
lists pg on osd [osd]
ceph pg ls-by-pool <poolstr> {<int>} {<pg-state> [<pg-state>...]}
lists pg with pool = [poolname]
ceph pg ls-by-primary <osdname (id|osd.id)> {<int>} {<pg-state> [<pg-state>...]}
lists pg with primary = [osd]
ceph pg map <pgid>
shows mapping of pg to osds.
ceph pg repair <pgid>
starts repair on <pgid>.
ceph pg scrub <pgid>
starts scrub on <pgid>.
ceph pg stat
shows placement group status.
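A common scrub-and-repair sequence, assuming pg 2.5 was flagged (the pgid is a placeholder):
ceph pg dump_stuck unclean
ceph pg map 2.5
ceph pg deep-scrub 2.5
# only after investigating the inconsistency:
ceph pg repair 2.5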

ceph osd

ceph osd blocklist add {EntityAddr} {<float[0.0-]>}
add {addr} to blocklist
ceph osd blocklist ls
show blocklisted clients
ceph osd blocklist rm {EntityAddr}
remove {addr} from blocklist
ceph osd blocked-by
prints a histogram of which OSDs are blocking their peers
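For instance, blocklisting a misbehaving client (the address is a placeholder; the trailing number should be the blocklist duration in seconds, 3600 here):
ceph osd blocklist add 192.168.1.50:0/3710147553 3600
ceph osd blocklist ls
ceph osd blocklist rm 192.168.1.50:0/3710147553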
ceph osd new {<uuid>} {<id>} -i {<params.json>}
To create a new OSD or recreate a previously destroyed OSD with a specific id. Please look up the documentation if you're planning to use this command.
ceph osd crush add <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]
adds or updates crushmap position and weight for <name> with <weight> and location <args>.
ceph osd crush add-bucket <name> <type>
adds no-parent (probably root) crush bucket <name> of type <type>.
ceph osd crush create-or-move <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]
creates entry or moves existing entry for <name> <weight> at/to location <args>.
ceph osd crush dump
dumps crush map.
ceph osd crush link <name> <args> [<args>...]
links existing entry for <name> under location <args>.
ceph osd crush move <name> <args> [<args>...]
moves existing entry for <name> to location <args>.
ceph osd crush remove <name> {<ancestor>}
removes <name> from crush map (everywhere, or just at <ancestor>).
ceph osd crush rename-bucket <srcname> <dstname>
renames bucket <srcname> to <dstname>
ceph osd crush reweight <name> <float[0.0-]>
change <name>’s weight to <weight> in crush map.
ceph osd crush reweight-all
recalculate the weights for the tree to ensure they sum correctly
ceph osd crush reweight-subtree <name> <weight>
changes all leaf items beneath <name> to <weight> in crush map
ceph osd crush rm <name> {<ancestor>}
removes <name> from crush map (everywhere, or just at <ancestor>).
ceph osd crush rule create-erasure <name> {<profile>}
creates crush rule <name> for erasure coded pool created with <profile> (default default).
ceph osd crush rule create-simple <name> <root> <type> {firstn|indep}
creates crush rule <name> to start from <root>, replicate across buckets of type <type>, using a choose mode of <firstn|indep> (default firstn; indep best for erasure pools).
ceph osd crush rule dump {<name>}
dumps crush rule <name> (default all).
ceph osd crush rule ls
lists crush rules.
ceph osd crush rule rm <name>
removes crush rule <name>
ceph osd crush set <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]
updates crushmap position and weight for <name> to <weight> with location <args>.
ceph osd crush show-tunables
shows current crush tunables.
ceph osd crush tree
shows the crush buckets and items in a tree view.
ceph osd crush unlink <name> {<ancestor>}
unlinks <name> from crush map (everywhere, or just at <ancestor>).
ceph osd df {plain|tree}
shows OSD utilization
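For example, nudging down the weight of a nearly full OSD and checking the effect (osd id and weight are illustrative):
ceph osd crush reweight osd.12 1.5
ceph osd df tree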
ceph osd deep-scrub <who>
initiates deep scrub on specified osd.
ceph osd down <ids> [<ids>...]
sets osd(s) <id> [<id>…] down.
ceph osd dump
prints summary of OSD map.
ceph osd find <int[0-]>
finds osd <id> in the CRUSH map and shows its location.
ceph osd getcrushmap
gets CRUSH map.
ceph osd getmap
gets OSD map.
ceph osd getmaxosd
shows largest OSD id
ceph osd in <ids> [<ids>...]
sets osd(s) <id> [<id>…] in.
ceph osd lost <int[0-]> {--yes-i-really-mean-it}
marks osd as permanently lost. THIS DESTROYS DATA IF NO MORE REPLICAS EXIST, BE CAREFUL.
ceph osd ls
shows all OSD ids.
ceph osd lspools
lists pools
ceph osd map <poolname> <objectname>
finds pg for <object> in <pool>.
ceph osd metadata {int[0-]} (default all)
fetches metadata for osd <id>.
ceph osd out <ids> [<ids>...]
sets osd(s) <id> [<id>…] out.
ceph osd ok-to-stop <id> [<ids>...] [--max <num>]
checks whether the list of OSD(s) can be stopped without immediately making data unavailable. That is, all data should remain readable and writeable, although data redundancy may be reduced as some PGs may end up in a degraded (but active) state. It will return a success code if it is okay to stop the OSD(s), or an error code and informative message if it is not or if no conclusion can be drawn at the current time.
ceph osd pause
pauses osd.
ceph osd perf
prints dump of OSD perf summary stats.
ceph osd force-create-pg <pgid>
forces creation of pg <pgid>.
ceph osd pool create <poolname> {<int[0-]>} {<int[0-]>} {replicated|erasure} {<erasure_code_profile>} {<rule>} {<int>} {--autoscale-mode=<on,off,warn>}
creates pool.
ceph osd pool delete <poolname> {<poolname>} {--yes-i-really-really-mean-it}
deletes pool. (DATA LOSS, BE CAREFUL!)
ceph osd pool get <poolname> size|min_size|pg_num|pgp_num|crush_rule|write_fadvise_dontneed
gets pool parameter <var>
ceph osd pool get <poolname> all
gets all pool parameters that apply to the pool's type.
ceph osd pool get-quota <poolname>
obtains object or byte limits for pool.
ceph osd pool ls {detail}
list pools
ceph osd pool mksnap <poolname> <snap>
makes snapshot <snap> in <pool>.
ceph osd pool rename <poolname> <poolname>
renames <srcpool> to <destpool>.
ceph osd pool rmsnap <poolname> <snap>
removes snapshot <snap> from <pool>.
ceph osd pool set <poolname> size|min_size|pg_num|pgp_num|crush_rule|hashpspool|nodelete|nopgchange|nosizechange|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|debug_fake_ec_pool|target_max_bytes|target_max_objects|cache_target_dirty_ratio|cache_target_dirty_high_ratio|cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|min_read_recency_for_promote|write_fadvise_dontneed|hit_set_grade_decay_rate|hit_set_search_last_n <val> {--yes-i-really-mean-it}
sets pool parameter <var> to <val>.
ceph osd pool set-quota <poolname> max_objects|max_bytes <val>
sets object or byte limit on pool.
ceph osd pool stats {<name>}
obtain stats from all pools, or from specified pool.
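Creating a small replicated pool and capping its size, as a sketch (pool name, pg counts, and the 10 GiB quota are arbitrary):
ceph osd pool create testpool 32 32 replicated
ceph osd pool set testpool size 3
ceph osd pool set-quota testpool max_bytes 10737418240
ceph osd pool get testpool all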
ceph osd repair <who>
initiates repair on a specified osd.
ceph osd reweight-by-pg {<int[100-]>} {<poolname> [<poolname>...]} {--no-increasing}
reweight OSDs by PG distribution [overload-percentage-for-consideration, default 120].
ceph osd reweight-by-utilization {<int[100-]> {<float[0.0-]> {<int[0-]>}}} {--no-increasing}
reweights OSDs by utilization. It only reweights outlier OSDs whose utilization exceeds the average, e.g. the default 120% limits reweight to those OSDs that are more than 20% over the average. [overload-threshold, default 120 [max_weight_change, default 0.05 [max_osds_to_adjust, default 4]]]
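For instance, a conservative pass with the defaults written out (threshold 120%, max weight change 0.05, at most 4 OSDs) and weight increases disabled:
# dry run first, then apply
ceph osd test-reweight-by-utilization 120 0.05 4 --no-increasing
ceph osd reweight-by-utilization 120 0.05 4 --no-increasing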
ceph osd rm <ids> [<ids>...]
removes osd(s) <id> [<id>…] from the OSD map.
ceph osd destroy <id> {--yes-i-really-mean-it}
marks OSD id as destroyed, removing its cephx entity’s keys and all of its dm-crypt and daemon-private config key entries. This command will not remove the OSD from crush, nor will it remove the OSD from the OSD map. Instead, once the command successfully completes, the OSD will show marked as destroyed. In order to mark an OSD as destroyed, the OSD must first be marked as lost.
ceph osd purge <id> {--yes-i-really-mean-it}
performs a combination of osd destroy, osd rm and osd crush remove.
ceph osd safe-to-destroy <id> [<ids>...]
checks whether it is safe to remove or destroy an OSD without reducing overall data redundancy or durability. It will return a success code if it is definitely safe, or an error code and informative message if it is not or if no conclusion can be drawn at the current time.
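A cautious removal of a failed OSD using the checks above (osd id 7 is a placeholder):
ceph osd safe-to-destroy 7
ceph osd out 7
# wait for rebalancing, then remove it from crush and the OSD map in one go
ceph osd purge 7 --yes-i-really-mean-it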
ceph osd scrub <who>
initiates scrub on specified osd.
ceph osd set pause|noup|nodown|noout|noin|nobackfill|norebalance|norecover|noscrub|nodeep-scrub|notieragent
sets cluster-wide <flag> by updating OSD map. The full flag is not honored anymore since the Mimic release, and ceph osd set full is not supported in the Octopus release.
ceph osd setcrushmap
sets crush map from input file.
ceph osd setmaxosd <int[0-]>
sets new maximum osd value.
ceph osd set-require-min-compat-client <version>
enforces the cluster to be backward compatible with the specified client version. This subcommand prevents you from making any changes (e.g., crush tunables, or using new features) that would violate the current setting. Please note, this subcommand will fail if any connected daemon or client is not compatible with the features offered by the given <version>.
ceph osd stat
prints summary of OSD map.
ceph osd tree {<int[0-]>}
prints OSD tree.
ceph osd unpause
unpauses osd.
ceph osd unset pause|noup|nodown|noout|noin|nobackfill|norebalance|norecover|noscrub|nodeep-scrub|notieragent
unsets cluster-wide <flag> by updating OSD map.
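A common maintenance pattern with these flags: set noout before rebooting an OSD host so the cluster does not start rebalancing, then clear it afterwards:
ceph osd set noout
# ...reboot or service the host...
ceph osd unset noout
ceph -s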
 
