ceph osd blocklist add {EntityAddr} {<float[0.0-]>} |
add {addr} to blocklist |
ceph osd blocklist ls |
show blocklisted clients |
ceph osd blocklist rm {EntityAddr} |
remove {addr} from blocklist |
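For example, to blocklist a client address for one hour and then remove it (the addr:port/nonce shown is illustrative):
    ceph osd blocklist add 192.168.0.10:0/3710147553 3600
    ceph osd blocklist ls
    ceph osd blocklist rm 192.168.0.10:0/3710147553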
ceph osd blocked-by |
prints a histogram of which OSDs are blocking their peers |
ceph osd new {<uuid>} {<id>} -i {<params.json>} |
creates a new OSD, or recreates a previously destroyed OSD with a specific id. Consult the documentation before using this command. |
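A minimal sketch of recreating a destroyed OSD under the same id (the uuid and params file are illustrative; per the documentation, params.json may carry cephx/dm-crypt secrets):
    ceph osd new 5c9d6549-c9b8-4b8c-a2a2-0b7d6a5a2e11 7 -i params.json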
ceph osd crush add <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...] |
adds or updates crushmap position and weight for <name> with <weight> and location <args>. |
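For example, to add osd.3 with weight 1.0 under a host bucket (bucket names are illustrative):
    ceph osd crush add osd.3 1.0 root=default rack=rack1 host=node2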
ceph osd crush add-bucket <name> <type> |
adds no-parent (probably root) crush bucket <name> of type <type>. |
ceph osd crush create-or-move <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...] |
creates entry or moves existing entry for <name> <weight> at/to location <args>. |
ceph osd crush dump |
dumps crush map. |
ceph osd crush link <name> <args> [<args>...] |
links existing entry for <name> under location <args>. |
ceph osd crush move <name> <args> [<args>...] |
moves existing entry for <name> to location <args>. |
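For example, to move an existing host bucket under a different rack (names are illustrative):
    ceph osd crush move node2 root=default rack=rack2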
ceph osd crush remove <name> {<ancestor>} |
removes <name> from crush map (everywhere, or just at <ancestor>). |
ceph osd crush rename-bucket <srcname> <dstname> |
renames bucket <srcname> to <dstname> |
ceph osd crush reweight <name> <float[0.0-]> |
change <name>’s weight to <weight> in crush map. |
ceph osd crush reweight-all |
recalculate the weights for the tree to ensure they sum correctly |
ceph osd crush reweight-subtree <name> <weight> |
changes all leaf items beneath <name> to <weight> in crush map |
ceph osd crush rm <name> {<ancestor>} |
removes <name> from crush map (everywhere, or just at <ancestor>). |
ceph osd crush rule create-erasure <name> {<profile>} |
creates crush rule <name> for erasure coded pool created with <profile> (default default). |
ceph osd crush rule create-simple <name> <root> <type> {firstn|indep} |
creates crush rule <name> to start from <root>, replicate across buckets of type <type>, using a choose mode of <firstn|indep> (default firstn; indep best for erasure pools). |
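For example, to create a replicated rule that picks hosts under the default root (the rule name is illustrative):
    ceph osd crush rule create-simple fast_rule default host firstn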
ceph osd crush rule dump {<name>} |
dumps crush rule <name> (default all). |
ceph osd crush rule ls |
lists crush rules. |
ceph osd crush rule rm <name> |
removes crush rule <name> |
ceph osd crush set <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...] |
updates crushmap position and weight for <name> (given as osdname or osd.id) to <weight> with location <args>. |
ceph osd crush show-tunables |
shows current crush tunables. |
ceph osd crush tree |
shows the crush buckets and items in a tree view. |
ceph osd crush unlink <name> {<ancestor>} |
unlinks <name> from crush map (everywhere, or just at <ancestor>). |
ceph osd df {plain|tree} |
shows OSD utilization |
ceph osd deep-scrub <who> |
initiates deep scrub on specified osd. |
ceph osd down <ids> [<ids>...] |
sets osd(s) <id> [<id>…] down. |
ceph osd dump |
prints summary of OSD map. |
ceph osd find <int[0-]> |
finds osd <id> in the CRUSH map and shows its location. |
ceph osd getcrushmap |
gets CRUSH map. |
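The map is emitted in binary form; a common pattern is to save it and decompile it with crushtool for editing (filenames are illustrative):
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt   # decompile to editable text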
ceph osd getmap |
gets OSD map. |
ceph osd getmaxosd |
shows largest OSD id |
ceph osd in <ids> [<ids>...] |
sets osd(s) <id> [<id>…] in. |
ceph osd lost <int[0-]> {--yes-i-really-mean-it} |
marks osd as permanently lost. THIS DESTROYS DATA IF NO MORE REPLICAS EXIST, BE CAREFUL. |
ceph osd ls |
shows all OSD ids. |
ceph osd lspools |
lists pools |
ceph osd map <poolname> <objectname> |
finds pg for <object> in <pool>. |
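For example (pool and object names are illustrative); the output includes the PG id and the acting OSD set:
    ceph osd map mypool myobject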
ceph osd metadata {int[0-]} (default all) |
fetches metadata for osd <id>. |
ceph osd out <ids> [<ids>...] |
sets osd(s) <id> [<id>…] out. |
ceph osd ok-to-stop <id> [<ids>...] [--max <num>] |
checks whether the list of OSD(s) can be stopped without immediately making data unavailable. That is, all data should remain readable and writeable, although data redundancy may be reduced as some PGs may end up in a degraded (but active) state. It will return a success code if it is okay to stop the OSD(s), or an error code and informative message if it is not or if no conclusion can be drawn at the current time. |
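A typical maintenance pattern is to gate the daemon stop on this check, e.g. on a systemd-managed, non-containerized deployment:
    ceph osd ok-to-stop 3 && sudo systemctl stop ceph-osd@3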
ceph osd pause |
pauses osd. |
ceph osd perf |
prints dump of OSD perf summary stats. |
ceph osd force-create-pg <pgid> |
forces creation of pg <pgid>. |
ceph osd pool create <poolname> {<int[0-]>} {<int[0-]>} {replicated|erasure} {<erasure_code_profile>} {<rule>} {<int>} {--autoscale-mode=<on,off,warn>} |
creates pool. |
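For example, a replicated pool with 128 placement groups (the pool name is illustrative; on releases with the PG autoscaler, the PG counts may be omitted):
    ceph osd pool create mypool 128 128 replicated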
ceph osd pool delete <poolname> {<poolname>} {--yes-i-really-really-mean-it} |
deletes pool. (DATA LOSS, BE CAREFUL!) |
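Note that the pool name must be given twice, and the monitors must permit deletion (mon_allow_pool_delete). For example:
    ceph osd pool delete mypool mypool --yes-i-really-really-mean-it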
ceph osd pool get <poolname> size|min_size|pg_num|pgp_num|crush_rule|write_fadvise_dontneed |
gets pool parameter <var> |
ceph osd pool get <poolname> all |
gets all pool parameters that apply to the pool's type. |
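For example (the pool name is illustrative):
    ceph osd pool get mypool size
    ceph osd pool get mypool all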
ceph osd pool get-quota <poolname> |
obtains object or byte limits for pool. |
ceph osd pool ls {detail} |
list pools |
ceph osd pool mksnap <poolname> <snap> |
makes snapshot <snap> in <pool>. |
ceph osd pool rename <srcpool> <destpool> |
renames <srcpool> to <destpool>. |
ceph osd pool rmsnap <poolname> <snap> |
removes snapshot <snap> from <pool>. |
ceph osd pool set <poolname> size|min_size|pg_num|pgp_num|crush_rule|hashpspool|nodelete|nopgchange|nosizechange|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|debug_fake_ec_pool|target_max_bytes|target_max_objects|cache_target_dirty_ratio|cache_target_dirty_high_ratio|cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|min_read_recency_for_promote|write_fadvise_dontneed|hit_set_grade_decay_rate|hit_set_search_last_n <val> {--yes-i-really-mean-it} |
sets pool parameter <var> to <val>. |
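For example, to raise the replica count of a pool (the pool name is illustrative):
    ceph osd pool set mypool size 3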
ceph osd pool set-quota <poolname> max_objects|max_bytes <val> |
sets object or byte limit on pool. |
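For example (the pool name is illustrative); setting a quota to 0 removes it:
    ceph osd pool set-quota mypool max_bytes 10737418240   # 10 GiB
    ceph osd pool set-quota mypool max_objects 1000000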
ceph osd pool stats {<name>} |
obtains stats from all pools, or from the specified pool. |
ceph osd repair <who> |
initiates repair on a specified osd. |
ceph osd reweight-by-pg {<int[100-]>} {<poolname> [<poolname...]} {--no-increasing} |
reweights OSDs by PG distribution [overload-percentage-for-consideration, default 120]. |
ceph osd reweight-by-utilization {<int[100-]> {<float[0.0-]> {<int[0-]>}}} {--no-increasing} |
reweights OSDs by utilization. It only reweights outlier OSDs whose utilization exceeds the average; e.g., the default 120% limits reweighting to those OSDs that are more than 20% over the average. [overload-threshold, default 120 [max_weight_change, default 0.05 [max_osds_to_adjust, default 4]]] |
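For example, using the defaults explicitly; recent releases also provide a dry-run variant, test-reweight-by-utilization, which takes the same arguments:
    ceph osd test-reweight-by-utilization 120 0.05 4   # report what would change
    ceph osd reweight-by-utilization 120 0.05 4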
ceph osd rm <ids> [<ids>...] |
removes osd(s) <id> [<id>…] from the OSD map. |
ceph osd destroy <id> {--yes-i-really-mean-it} |
marks OSD id as destroyed, removing its cephx entity’s keys and all of its dm-crypt and daemon-private config key entries. This command will not remove the OSD from crush, nor will it remove the OSD from the OSD map. Instead, once the command successfully completes, the OSD will show marked as destroyed. In order to mark an OSD as destroyed, the OSD must first be marked as lost. |
ceph osd purge <id> {--yes-i-really-mean-it} |
performs a combination of osd destroy, osd rm and osd crush remove. |
ceph osd safe-to-destroy <id> [<ids>...] |
checks whether it is safe to remove or destroy an OSD without reducing overall data redundancy or durability. It will return a success code if it is definitely safe, or an error code and informative message if it is not or if no conclusion can be drawn at the current time. |
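A cautious removal pattern is to gate the purge on this check (the osd id is illustrative):
    ceph osd safe-to-destroy 7 && ceph osd purge 7 --yes-i-really-mean-it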
ceph osd scrub <who> |
initiates scrub on specified osd. |
ceph osd set pause|noup|nodown|noout|noin|nobackfill|norebalance|norecover|noscrub|nodeep-scrub|notieragent |
sets cluster-wide <flag> by updating the OSD map. The full flag has not been honored since the Mimic release, and ceph osd set full is not supported in the Octopus release. |
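A common maintenance pattern is to set noout so stopped OSDs are not marked out and rebalanced away:
    ceph osd set noout
    # ... perform maintenance ...
    ceph osd unset noout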
ceph osd setcrushmap |
sets crush map from input file. |
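To complete the round trip begun under getcrushmap, recompile the edited text map and inject it (filenames are illustrative):
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new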
ceph osd setmaxosd <int[0-]> |
sets new maximum osd value. |
ceph osd set-require-min-compat-client <version> |
requires the cluster to remain backward compatible with the specified client version. This subcommand prevents you from making any changes (e.g., crush tunables, or using new features) that would violate the current setting. Note that this subcommand will fail if any connected daemon or client is not compatible with the features offered by the given <version>. |
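For example, to require clients to support features up to the Luminous release:
    ceph osd set-require-min-compat-client luminous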
ceph osd stat |
prints summary of OSD map. |
ceph osd tree {<int[0-]>} |
prints OSD tree. |
ceph osd unpause |
unpauses osd. |
ceph osd unset pause|noup|nodown|noout|noin|nobackfill|norebalance|norecover|noscrub|nodeep-scrub|notieragent |
unsets cluster-wide <flag> by updating OSD map. |