TigerJr Guru
Joined: 19 Jun 2007 Posts: 540
Posted: Sat Aug 24, 2019 2:54 am    Post subject: RBD pools in Ceph Luminous with CRUSH rules
Intro:
I'm using a SuperChassis 721TQ-250B with an ASUS P10S-I mainboard for my home storage box (it is not a NAS).
For storage space I use four Seagate ST2000DM008-2FR102 hard drives connected via the miniSAS HD port; for the Ceph install and the Ceph journals I use one 120 GB SmartBuy SSD connected via the M.2 port.
I created 3 pools: rbd, rbd128 and c128 (the latter two with 128 placement groups), and 2 CRUSH rules, one for HDD and one for SSD placement (the first rule in the map is the default one).
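For reference, the pools were created and pointed at a rule roughly like this (a sketch from memory; the exact pg counts are visible in the osd dump below):
Code: | ceph osd pool create rbd 64 64
ceph osd pool create rbd128 128 128
ceph osd pool create c128 128 128
# attach a pool to a rule by name
ceph osd pool set rbd crush_rule hdd-rule |
The decompiled CRUSH map: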
Code: | # buckets
host moon-8 {
        id -3           # do not change unnecessarily
        id -2 class hdd # do not change unnecessarily
        id -6 class ssd # do not change unnecessarily
        # weight 7.274
        alg straw2
        hash 0  # rjenkins1
        item osd.0 weight 1.818
        item osd.1 weight 1.818
        item osd.2 weight 1.818
        item osd.3 weight 1.818
}
root default {
        id -1           # do not change unnecessarily
        id -4 class hdd # do not change unnecessarily
        id -9 class ssd # do not change unnecessarily
        # weight 7.274
        alg straw2
        hash 0  # rjenkins1
        item moon-8 weight 7.274
}
root cache {
        id -7           # do not change unnecessarily
        id -8 class hdd # do not change unnecessarily
        id -5 class ssd # do not change unnecessarily
        # weight 0.045
        alg straw2
        hash 0  # rjenkins1
        item osd.4 weight 0.045
}
# rules
rule replicated_rule {
        id 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
}
rule cache1 {
        id 1
        type replicated
        min_size 1
        max_size 10
        step take cache
        step choose firstn 0 type osd
        step emit
}
rule hdd-rule {
        id 2
        type replicated
        min_size 1
        max_size 10
        step take default
        step choose firstn 0 type osd
        step emit
} |
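As far as I understand, a Luminous rule can also select by device class directly instead of going through a separate root. An SSD rule would then look something like this (a sketch; it assumes the SSD OSDs sit under the root the rule takes, and if I read my map right the only SSD OSD, osd.4, sits under root cache, not root default):
Code: | rule ssd-rule {
        id 3
        type replicated
        min_size 1
        max_size 10
        step take default class ssd
        step chooseleaf firstn 0 type osd
        step emit
} |
I believe the same rule can be generated with "ceph osd crush rule create-replicated ssd-rule default osd ssd", but I have not verified that against this map.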
ceph osd dump output:
Code: | pool 8 'rbd' replicated size 2 min_size 1 crush_rule 2 object_hash rjenkins pg_num 64 pgp_num 64 last_change 467 lfor 441/441 flags hashpspool stripe_width 0 application rbd
removed_snaps [1~3]
pool 10 'rbd128' replicated size 2 min_size 1 crush_rule 2 object_hash rjenkins pg_num 128 pgp_num 128 last_change 611 lfor 353/353 flags hashpspool tiers 11 read_tier 11 write_tier 11 stripe_width 0 application rbd
removed_snaps [1~3]
pool 11 'c128' replicated size 1 min_size 1 crush_rule 2 object_hash rjenkins pg_num 128 pgp_num 128 last_change 633 lfor 353/353 flags hashpspool,incomplete_clones tier_of 10 cache_mode writeback target_bytes 42949672960 hit_set bloom{false_positive_probability: 0.05, target_size: 0, seed: 0} 14400s x12 decay_rate 0 search_last_n 0 min_read_recency_for_promote 2 min_write_recency_for_promote 2 stripe_width 0
removed_snaps [1~3]
pool 13 'ssd_r2' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 128 pgp_num 128 last_change 643 flags hashpspool stripe_width 0 application rbd
max_osd 5
|
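For context, the cache tier between rbd128 and c128 was set up with commands along these lines (reconstructed from the dump above, so treat it as a sketch rather than the exact history):
Code: | ceph osd tier add rbd128 c128
ceph osd tier cache-mode c128 writeback
ceph osd tier set-overlay rbd128 c128
ceph osd pool set c128 target_max_bytes 42949672960
ceph osd pool set c128 hit_set_type bloom |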
My problem is with the SSD rule: it doesn't work. And when I set pool 10 to crush_rule 1, I can't write data to it anymore.
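I think the mapping a rule produces can be checked with crushtool, something like this (rule ids as in the map above):
Code: | ceph osd getcrushmap -o crushmap.bin
crushtool -i crushmap.bin --test --rule 1 --num-rep 2 --show-mappings |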
Placement groups summary:
Code: |
pool : 10 11 13 8 | SUM
------------------------------------------------
osd.0 59 32 5 38 | 134
osd.1 63 28 7 30 | 128
osd.2 69 34 6 34 | 143
osd.3 65 34 5 26 | 130
osd.4 0 0 105 0 | 105
------------------------------------------------
SUM : 256 128 128 128 |
|
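When the writes hang I would expect the stuck placement groups to show up in the cluster status, e.g.:
Code: | ceph health detail
ceph pg dump_stuck inactive
ceph pg dump_stuck unclean |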
Can anyone help with Ceph CRUSH rules? _________________ Do not use Gentoo, it dies