Separating SSD and HDD resource pools with the Ceph CRUSH map


Environment

  • 3 SSDs
  • 6 HDDs

    # ceph -s
      cluster:
        id:     b4c125fd-60ab-41ce-b51c-88833089a3ad
        health: HEALTH_OK
      services:
        mon: 3 daemons, quorum node1,node2,node3 (age 47m)
        mgr: node1(active, since 56m), standbys: node2, node3
        osd: 9 osds: 9 up (since 44m), 9 in (since 44m)
      data:
        pools:   1 pools, 1 pgs
        objects: 0 objects, 0 B
        usage:   9.1 GiB used, 30 TiB / 30 TiB avail
        pgs:     1 active+clean

    # ceph osd tree
    ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
    -1 29.83720 root default
    -3 8.73286 host node1
    0 hdd 3.63869 osd.0 up 1.00000 1.00000
    1 hdd 3.63869 osd.1 up 1.00000 1.00000
    6 ssd 1.45549 osd.6 up 1.00000 1.00000
    -5 12.37148 host node2
    2 hdd 5.45799 osd.2 up 1.00000 1.00000
    3 hdd 5.45799 osd.3 up 1.00000 1.00000
    7 ssd 1.45549 osd.7 up 1.00000 1.00000
    -7 8.73286 host node3
    4 hdd 3.63869 osd.4 up 1.00000 1.00000
    5 hdd 3.63869 osd.5 up 1.00000 1.00000
    8 ssd 1.45549 osd.8 up 1.00000 1.00000

Requirement

Use the CRUSH map to split the SSDs and HDDs into two separate resource pools: workloads that need high IOPS go to the SSD pool, everything else to the HDD pool.

Deployment

  1. View the current CRUSH tree:

    # ceph osd crush tree

    ID CLASS WEIGHT TYPE NAME
    -1 29.83720 root default
    -3 8.73286 host node1
    0 hdd 3.63869 osd.0
    1 hdd 3.63869 osd.1
    6 ssd 1.45549 osd.6
    -5 12.37148 host node2
    2 hdd 5.45799 osd.2
    3 hdd 5.45799 osd.3
    7 ssd 1.45549 osd.7
    -7 8.73286 host node3
    4 hdd 3.63869 osd.4
    5 hdd 3.63869 osd.5
    8 ssd 1.45549 osd.8
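
    The CLASS column shows that every OSD already carries an hdd or ssd device class. As a brief aside (a hedged sketch; these subcommands are available on Luminous and later), the class assignment can also be checked directly:

      # ceph osd crush class ls
      # ceph osd crush class ls-osd ssd
      # ceph osd crush class ls-osd hdd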

  2. View the full CRUSH map in detail:

    # ceph osd crush dump
    {
      "devices": [
        { "id": 0, "name": "osd.0", "class": "hdd" },
        { "id": 1, "name": "osd.1", "class": "hdd" },
        { "id": 2, "name": "osd.2", "class": "hdd" },
        { "id": 3, "name": "osd.3", "class": "hdd" },
        { "id": 4, "name": "osd.4", "class": "hdd" },
        { "id": 5, "name": "osd.5", "class": "hdd" },
        { "id": 6, "name": "osd.6", "class": "ssd" },
        { "id": 7, "name": "osd.7", "class": "ssd" },
        { "id": 8, "name": "osd.8", "class": "ssd" }
      ],
      "types": [
        { "type_id": 0, "name": "osd" },
        { "type_id": 1, "name": "host" },
        { "type_id": 2, "name": "chassis" },
        { "type_id": 3, "name": "rack" },
        { "type_id": 4, "name": "row" },
        { "type_id": 5, "name": "pdu" },
        { "type_id": 6, "name": "pod" },
        { "type_id": 7, "name": "room" },
        { "type_id": 8, "name": "datacenter" },
        { "type_id": 9, "name": "zone" },
        { "type_id": 10, "name": "region" },
        { "type_id": 11, "name": "root" }
      ],
      "buckets": [
        { "id": -1, "name": "default", "type_id": 11, "type_name": "root", "weight": 1955411,
          "alg": "straw2", "hash": "rjenkins1", "items": [
            { "id": -3, "weight": 572317, "pos": 0 },
            { "id": -5, "weight": 810777, "pos": 1 },
            { "id": -7, "weight": 572317, "pos": 2 } ] },
        { "id": -2, "name": "default~hdd", "type_id": 11, "type_name": "root", "weight": 1669250,
          "alg": "straw2", "hash": "rjenkins1", "items": [
            { "id": -4, "weight": 476930, "pos": 0 },
            { "id": -6, "weight": 715390, "pos": 1 },
            { "id": -8, "weight": 476930, "pos": 2 } ] },
        { "id": -3, "name": "node1", "type_id": 1, "type_name": "host", "weight": 572317,
          "alg": "straw2", "hash": "rjenkins1", "items": [
            { "id": 0, "weight": 238465, "pos": 0 },
            { "id": 1, "weight": 238465, "pos": 1 },
            { "id": 6, "weight": 95387, "pos": 2 } ] },
        { "id": -4, "name": "node1~hdd", "type_id": 1, "type_name": "host", "weight": 476930,
          "alg": "straw2", "hash": "rjenkins1", "items": [
            { "id": 0, "weight": 238465, "pos": 0 },
            { "id": 1, "weight": 238465, "pos": 1 } ] },
        { "id": -5, "name": "node2", "type_id": 1, "type_name": "host", "weight": 810777,
          "alg": "straw2", "hash": "rjenkins1", "items": [
            { "id": 2, "weight": 357695, "pos": 0 },
            { "id": 3, "weight": 357695, "pos": 1 },
            { "id": 7, "weight": 95387, "pos": 2 } ] },
        { "id": -6, "name": "node2~hdd", "type_id": 1, "type_name": "host", "weight": 715390,
          "alg": "straw2", "hash": "rjenkins1", "items": [
            { "id": 2, "weight": 357695, "pos": 0 },
            { "id": 3, "weight": 357695, "pos": 1 } ] },
        { "id": -7, "name": "node3", "type_id": 1, "type_name": "host", "weight": 572317,
          "alg": "straw2", "hash": "rjenkins1", "items": [
            { "id": 4, "weight": 238465, "pos": 0 },
            { "id": 5, "weight": 238465, "pos": 1 },
            { "id": 8, "weight": 95387, "pos": 2 } ] },
        { "id": -8, "name": "node3~hdd", "type_id": 1, "type_name": "host", "weight": 476930,
          "alg": "straw2", "hash": "rjenkins1", "items": [
            { "id": 4, "weight": 238465, "pos": 0 },
            { "id": 5, "weight": 238465, "pos": 1 } ] },
        { "id": -9, "name": "node1~ssd", "type_id": 1, "type_name": "host", "weight": 95387,
          "alg": "straw2", "hash": "rjenkins1", "items": [
            { "id": 6, "weight": 95387, "pos": 0 } ] },
        { "id": -10, "name": "node2~ssd", "type_id": 1, "type_name": "host", "weight": 95387,
          "alg": "straw2", "hash": "rjenkins1", "items": [
            { "id": 7, "weight": 95387, "pos": 0 } ] },
        { "id": -11, "name": "node3~ssd", "type_id": 1, "type_name": "host", "weight": 95387,
          "alg": "straw2", "hash": "rjenkins1", "items": [
            { "id": 8, "weight": 95387, "pos": 0 } ] },
        { "id": -12, "name": "default~ssd", "type_id": 11, "type_name": "root", "weight": 286161,
          "alg": "straw2", "hash": "rjenkins1", "items": [
            { "id": -9, "weight": 95387, "pos": 0 },
            { "id": -10, "weight": 95387, "pos": 1 },
            { "id": -11, "weight": 95387, "pos": 2 } ] }
      ],
      "rules": [
        { "rule_id": 0, "rule_name": "replicated_rule", "ruleset": 0, "type": 1,
          "min_size": 1, "max_size": 10, "steps": [
            { "op": "take", "item": -1, "item_name": "default" },
            { "op": "chooseleaf_firstn", "num": 0, "type": "host" },
            { "op": "emit" } ] }
      ],
      "tunables": {
        "choose_local_tries": 0,
        "choose_local_fallback_tries": 0,
        "choose_total_tries": 50,
        "chooseleaf_descend_once": 1,
        "chooseleaf_vary_r": 1,
        "chooseleaf_stable": 1,
        "straw_calc_version": 1,
        "allowed_bucket_algs": 54,
        "profile": "jewel",
        "optimal_tunables": 1,
        "legacy_tunables": 0,
        "minimum_required_version": "jewel",
        "require_feature_tunables": 1,
        "require_feature_tunables2": 1,
        "has_v2_rules": 0,
        "require_feature_tunables3": 1,
        "has_v3_rules": 0,
        "has_v4_buckets": 1,
        "require_feature_tunables5": 1,
        "has_v5_rules": 0
      },
      "choose_args": {}
    }
  3. View the rule that pools currently use to place data on OSDs:

    # ceph osd crush rule ls
    replicated_rule
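
    Only the default replicated_rule exists, and it does not distinguish between device classes. The rest of this post builds separate rules by hand-editing the CRUSH map. As a hedged aside (not used below, and assuming a Luminous-or-later cluster), a similar separation can usually be achieved without decompiling the map by creating class-aware rules directly from the existing device classes:

      # Sketch only: class-aware rules built on the hdd/ssd device classes
      # ceph osd crush rule create-replicated ssd_rule default host ssd
      # ceph osd crush rule create-replicated hdd_rule default host hdd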

Manually editing the CRUSH map

  1. Export the current CRUSH map:

    # ceph osd getcrushmap -o crushmap20200922.bin
    19
  2. Decompile the exported CRUSH map into a text file:

    # crushtool -d crushmap20200922.bin -o crushmap20200922.txt
    [root@node1 ceph]# ll
    total 8
    -rw-r--r-- 1 root root 1326 Sep 22 23:06 crushmap20200922.bin
    -rw-r--r-- 1 root root 1935 Sep 22 23:07 crushmap20200922.txt
  3. Edit the decompiled map

    1. Add three new host buckets for the SSDs, remove the SSD OSDs from the original host buckets, and rename the original hosts with an -hdd suffix at the same time:

      # The original host buckets
      ...
      # buckets: how the OSDs are grouped
      host node1 {
          id -3           # do not change unnecessarily
          id -4 class hdd # do not change unnecessarily
          id -9 class ssd # do not change unnecessarily
          # weight 8.733
          alg straw2
          hash 0  # rjenkins1
          item osd.0 weight 3.639
          item osd.1 weight 3.639
          item osd.6 weight 1.455
      }
      host node2 {
          id -5            # do not change unnecessarily
          id -6 class hdd  # do not change unnecessarily
          id -10 class ssd # do not change unnecessarily
          # weight 12.371
          alg straw2
          hash 0  # rjenkins1
          item osd.2 weight 5.458
          item osd.3 weight 5.458
          item osd.7 weight 1.455
      }
      host node3 {
          id -7            # do not change unnecessarily
          id -8 class hdd  # do not change unnecessarily
          id -11 class ssd # do not change unnecessarily
          # weight 8.733
          alg straw2
          hash 0  # rjenkins1
          item osd.4 weight 3.639
          item osd.5 weight 3.639
          item osd.8 weight 1.455
      }
      ...
      # The modified host buckets
      ...
      # buckets: how the OSDs are grouped
      host node1-hdd {
          id -3           # do not change unnecessarily
          id -4 class hdd # do not change unnecessarily
          # weight 8.733
          alg straw2
          hash 0  # rjenkins1
          item osd.0 weight 3.639
          item osd.1 weight 3.639
      }
      host node2-hdd {
          id -5           # do not change unnecessarily
          id -6 class hdd # do not change unnecessarily
          # weight 12.371
          alg straw2
          hash 0  # rjenkins1
          item osd.2 weight 5.458
          item osd.3 weight 5.458
      }
      host node3-hdd {
          id -7           # do not change unnecessarily
          id -8 class hdd # do not change unnecessarily
          # weight 8.733
          alg straw2
          hash 0  # rjenkins1
          item osd.4 weight 3.639
          item osd.5 weight 3.639
      }
      host node1-ssd {
          id -9 class ssd # do not change unnecessarily
          # weight 8.733
          alg straw2
          hash 0  # rjenkins1
          item osd.6 weight 1.455
      }
      host node2-ssd {
          id -10 class ssd # do not change unnecessarily
          # weight 12.371
          alg straw2
          hash 0  # rjenkins1
          item osd.7 weight 1.455
      }
      host node3-ssd {
          id -11 class ssd # do not change unnecessarily
          # weight 8.733
          alg straw2
          hash 0  # rjenkins1
          item osd.8 weight 1.455
      }
      ...
    2. Define the upper-level root buckets that reference the new hosts: rename the old root default to root hdd, add a root ssd, and adjust the weights. A bucket's weight is simply the sum of the capacities (in TiB) of the OSDs below it; for example, a single 1.8 TB disk shows up as about 1.635 TiB of usable capacity, i.e. a weight of 1.63539. The arithmetic for the weights used here is sketched after the block below.

      # The original default root
      ...
      root default {
          id -1            # do not change unnecessarily
          id -2 class hdd  # do not change unnecessarily
          id -12 class ssd # do not change unnecessarily
          # weight 29.837
          alg straw2
          hash 0  # rjenkins1
          item node1 weight 8.733
          item node2 weight 12.371
          item node3 weight 8.733
      }
      ...
      # The modified roots
      ...
      root hdd {
          id -1           # do not change unnecessarily
          id -2 class hdd # do not change unnecessarily
          # weight 29.837
          alg straw2
          hash 0  # rjenkins1
          item node1-hdd weight 7.278
          item node2-hdd weight 10.916
          item node3-hdd weight 7.278
      }
      root ssd {
          # weight 29.837
          alg straw2
          hash 0  # rjenkins1
          item node1-ssd weight 1.456
          item node2-ssd weight 1.456
          item node3-ssd weight 1.456
      }
      ...
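
      As promised above, a quick sketch of where the new weights come from; they are just the sums of the OSD weights shown in ceph osd tree (the original text rounds the single-SSD hosts from 1.455 up to 1.456):

        # node1-hdd / node3-hdd : 3.639 + 3.639 = 7.278
        # node2-hdd             : 5.458 + 5.458 = 10.916
        # node1/2/3-ssd         : one SSD each  = 1.455 (written as 1.456 above)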
    3. Finally, define the rules that reference the new roots: rename the old replicated_rule to hdd_rule and add an ssd_rule:

      # The original rule
      ...
      rule replicated_rule {
          id 0
          type replicated
          min_size 1
          max_size 10
          step take default
          step chooseleaf firstn 0 type host
          step emit
      }
      ...
      # The modified rules
      ...
      rule hdd_rule {
          id 0
          type replicated
          min_size 1
          max_size 10
          step take hdd
          step chooseleaf firstn 0 type host
          step emit
      }
      rule ssd_rule {
          id 1
          type replicated
          min_size 1
          max_size 10
          step take ssd
          step chooseleaf firstn 0 type host
          step emit
      }
      ...
    4. The complete CRUSH map after the edits:

      # begin crush map
      tunable choose_local_tries 0
      tunable choose_local_fallback_tries 0
      tunable choose_total_tries 50
      tunable chooseleaf_descend_once 1
      tunable chooseleaf_vary_r 1
      tunable chooseleaf_stable 1
      tunable straw_calc_version 1
      tunable allowed_bucket_algs 54

      # devices: the OSDs and their device classes
      device 0 osd.0 class hdd
      device 1 osd.1 class hdd
      device 2 osd.2 class hdd
      device 3 osd.3 class hdd
      device 4 osd.4 class hdd
      device 5 osd.5 class hdd
      device 6 osd.6 class ssd
      device 7 osd.7 class ssd
      device 8 osd.8 class ssd

      # types: the hierarchy levels (osd, host, rack, room, and so on)
      type 0 osd
      type 1 host
      type 2 chassis
      type 3 rack
      type 4 row
      type 5 pdu
      type 6 pod
      type 7 room
      type 8 datacenter
      type 9 zone
      type 10 region
      type 11 root

      # buckets: how the OSDs are grouped
      host node1-hdd {
          id -3           # do not change unnecessarily
          id -4 class hdd # do not change unnecessarily
          # weight 8.733
          alg straw2
          hash 0  # rjenkins1
          item osd.0 weight 3.639
          item osd.1 weight 3.639
      }
      host node2-hdd {
          id -5           # do not change unnecessarily
          id -6 class hdd # do not change unnecessarily
          # weight 12.371
          alg straw2
          hash 0  # rjenkins1
          item osd.2 weight 5.458
          item osd.3 weight 5.458
      }
      host node3-hdd {
          id -7           # do not change unnecessarily
          id -8 class hdd # do not change unnecessarily
          # weight 8.733
          alg straw2
          hash 0  # rjenkins1
          item osd.4 weight 3.639
          item osd.5 weight 3.639
      }
      host node1-ssd {
          id -9 class ssd # do not change unnecessarily
          # weight 8.733
          alg straw2
          hash 0  # rjenkins1
          item osd.6 weight 1.455
      }
      host node2-ssd {
          id -10 class ssd # do not change unnecessarily
          # weight 12.371
          alg straw2
          hash 0  # rjenkins1
          item osd.7 weight 1.455
      }
      host node3-ssd {
          id -11 class ssd # do not change unnecessarily
          # weight 8.733
          alg straw2
          hash 0  # rjenkins1
          item osd.8 weight 1.455
      }
      root hdd {
          id -1           # do not change unnecessarily
          id -2 class hdd # do not change unnecessarily
          # weight 29.837
          alg straw2
          hash 0  # rjenkins1
          item node1-hdd weight 7.278
          item node2-hdd weight 10.916
          item node3-hdd weight 7.278
      }
      root ssd {
          # weight 29.837
          alg straw2
          hash 0  # rjenkins1
          item node1-ssd weight 1.456
          item node2-ssd weight 1.456
          item node3-ssd weight 1.456
      }

      # rules: the rules pools are bound to; each rule starts from one of the roots
      rule hdd_rule {
          id 0
          type replicated
          min_size 1
          max_size 10
          step take hdd
          step chooseleaf firstn 0 type host
          step emit
      }
      rule ssd_rule {
          id 1
          type replicated
          min_size 1
          max_size 10
          step take ssd
          step chooseleaf firstn 0 type host
          step emit
      }
      # end crush map
    5. Compile the edited map back into a binary file:

      # crushtool -c crushmap20200922.txt -o crushmap20200922-new.bin
      # ll
      total 12
      -rw-r--r-- 1 root root 1326 Sep 22 23:06 crushmap20200922.bin
      -rw-r--r-- 1 root root 2113 Sep 22 23:33 crushmap20200922-new.bin
      -rw-r--r-- 1 root root 2516 Sep 22 23:33 crushmap20200922.txt
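
      Before injecting the new map, it can be exercised offline. This optional sanity check is sketched below using crushtool's test mode (in the edited map, rule 0 is hdd_rule and rule 1 is ssd_rule):

        # Simulate placements with the new map before applying it
        # crushtool -i crushmap20200922-new.bin --test --rule 0 --num-rep 3 --show-mappings | head
        # crushtool -i crushmap20200922-new.bin --test --rule 1 --num-rep 3 --show-utilization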
    6. Apply the new map:

      # ceph osd setcrushmap -i crushmap20200922-new.bin
      20
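
      Injecting a new map on a cluster that already holds data normally triggers PG remapping and data movement; as a brief aside, progress can be watched while the cluster settles:

        # ceph -s
        # ceph -w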
    7. Compare the tree before and after applying the map; the SSD and HDD hierarchies are now separate:

      # Before
      # ceph osd tree
      ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
      -1 29.83720 root default
      -3 8.73286 host node1
      0 hdd 3.63869 osd.0 up 1.00000 1.00000
      1 hdd 3.63869 osd.1 up 1.00000 1.00000
      6 ssd 1.45549 osd.6 up 1.00000 1.00000
      -5 12.37148 host node2
      2 hdd 5.45799 osd.2 up 1.00000 1.00000
      3 hdd 5.45799 osd.3 up 1.00000 1.00000
      7 ssd 1.45549 osd.7 up 1.00000 1.00000
      -7 8.73286 host node3
      4 hdd 3.63869 osd.4 up 1.00000 1.00000
      5 hdd 3.63869 osd.5 up 1.00000 1.00000
      8 ssd 1.45549 osd.8 up 1.00000 1.00000
      # After: there are now two roots, hdd and ssd
      # ceph osd tree
      ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
      -15 4.36798 root ssd
      -12 1.45599 host node1-ssd
      6 ssd 1.45499 osd.6 up 1.00000 1.00000
      -13 1.45599 host node2-ssd
      7 ssd 1.45499 osd.7 up 1.00000 1.00000
      -14 1.45599 host node3-ssd
      8 ssd 1.45499 osd.8 up 1.00000 1.00000
      -1 25.47200 root hdd
      -3 7.27800 host node1-hdd
      0 hdd 3.63899 osd.0 up 1.00000 1.00000
      1 hdd 3.63899 osd.1 up 1.00000 1.00000
      -5 10.91600 host node2-hdd
      2 hdd 5.45799 osd.2 up 1.00000 1.00000
      3 hdd 5.45799 osd.3 up 1.00000 1.00000
      -7 7.27800 host node3-hdd
      4 hdd 3.63899 osd.4 up 1.00000 1.00000
      5 hdd 3.63899 osd.5 up 1.00000 1.00000
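
      As a brief aside, the renamed rules and the per-root capacity/usage can also be checked with:

        # ceph osd crush rule ls
        # ceph osd df tree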
    8. Verify: switch the existing ceph-demo pool over to the SSD rule:

      # ceph osd pool get ceph-demo crush_rule
      crush_rule: hdd_rule
      # ceph osd pool set ceph-demo crush_rule ssd_rule
      set pool 2 crush_rule to ssd_rule
      # ceph osd pool get ceph-demo crush_rule
      crush_rule: ssd_rule
      # Create a 100G RBD image
      # rbd create ceph-demo/rbd-demo.img --size 100G
      # rbd info ceph-demo/rbd-demo.img
      rbd image 'rbd-demo.img':
      size 100 GiB in 25600 objects
      order 22 (4 MiB objects)
      snapshot_count: 0
      id: 39a814402607
      block_name_prefix: rbd_data.39a814402607
      format: 2
      features: layering
      op_features:
      flags:
      create_timestamp: Tue Sep 22 23:42:55 2020
      access_timestamp: Tue Sep 22 23:42:55 2020
      modify_timestamp: Tue Sep 22 23:42:55 2020
      # Enable the rbd application on the ceph-demo pool
      # ceph osd pool application enable ceph-demo rbd
      # Map the image locally
      # rbd map ceph-demo/rbd-demo.img
      /dev/rbd0
      # mkfs.xfs /dev/rbd0
      meta-data=/dev/rbd0 isize=512 agcount=16, agsize=1638400 blks
      = sectsz=512 attr=2, projid32bit=1
      = crc=1 finobt=0, sparse=0
      data = bsize=4096 blocks=26214400, imaxpct=25
      = sunit=1024 swidth=1024 blks
      naming =version 2 bsize=4096 ascii-ci=0 ftype=1
      log =internal log bsize=4096 blocks=12800, version=2
      = sectsz=512 sunit=8 blks, lazy-count=1
      realtime =none extsz=4096 blocks=0, rtextents=0
      [root@node1 ceph]# mkdir /mnt/rdb-demo
      [root@node1 ceph]# mount /dev/rbd0 /mnt/rdb-demo
      # Finally, a quick write test: about 1.2 GB/s, consistent with SSD backing
      # time dd if=/dev/zero of=test.dbf bs=8k count=300000
      300000+0 records in
      300000+0 records out
      2457600000 bytes (2.5 GB) copied, 2.06784 s, 1.2 GB/s
      real 0m2.072s
      user 0m0.318s
      sys 0m1.742s
      # The object also maps onto the three SSD OSDs
      # ceph osd map ceph-demo rbd-demo.img
      osdmap e81 pool 'ceph-demo' (2) object 'rbd-demo.img' -> pg 2.7d92bf55 (2.15) -> up ([6,8,7], p6) acting ([6,8,7], p6)
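
      Conversely, a pool for colder, lower-IOPS data can be bound to the HDD rule, either by creating it directly on that rule or by switching it afterwards. A minimal sketch (the pool name hdd-demo and the PG count 64 are made up for illustration):

        # Create a pool directly on the HDD rule (illustrative name and PG count)
        # ceph osd pool create hdd-demo 64 64 replicated hdd_rule
        # ceph osd pool get hdd-demo crush_rule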
    9. Disable the automatic CRUSH map update that runs when an OSD starts. Otherwise, restarting the OSD services or adding OSDs will re-insert the OSDs under their default locations and trigger large-scale PG migration, which can easily cause an incident. Add osd crush update on start = false to the configuration file. Once this option is set, any OSD added later must be placed into the CRUSH hierarchy manually and the map re-applied (a sketch follows after the block below).

      # vim ceph.conf
      ...
      [osd]
      osd crush update on start = false
      ...
      # Push the config to the other nodes
      # ceph-deploy --overwrite-conf config push node1 node2 node3
      [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
      [ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy --overwrite-conf config push node1 node2 node3
      [ceph_deploy.cli][INFO ] ceph-deploy options:
      [ceph_deploy.cli][INFO ] username : None
      [ceph_deploy.cli][INFO ] verbose : False
      [ceph_deploy.cli][INFO ] overwrite_conf : True
      [ceph_deploy.cli][INFO ] subcommand : push
      [ceph_deploy.cli][INFO ] quiet : False
      [ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f1e65ed7b00>
      [ceph_deploy.cli][INFO ] cluster : ceph
      [ceph_deploy.cli][INFO ] client : ['node1', 'node2', 'node3']
      [ceph_deploy.cli][INFO ] func : <function config at 0x7f1e667c0c08>
      [ceph_deploy.cli][INFO ] ceph_conf : None
      [ceph_deploy.cli][INFO ] default_release : False
      [ceph_deploy.config][DEBUG ] Pushing config to node1
      [node1][DEBUG ] connected to host: node1
      [node1][DEBUG ] detect platform information from remote host
      [node1][DEBUG ] detect machine type
      [node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
      [ceph_deploy.config][DEBUG ] Pushing config to node2
      [node2][DEBUG ] connected to host: node2
      [node2][DEBUG ] detect platform information from remote host
      [node2][DEBUG ] detect machine type
      [node2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
      [ceph_deploy.config][DEBUG ] Pushing config to node3
      [node3][DEBUG ] connected to host: node3
      [node3][DEBUG ] detect platform information from remote host
      [node3][DEBUG ] detect machine type
      [node3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
      # Restart the OSDs on all three nodes
      # systemctl restart ceph-osd.target
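
      With osd crush update on start disabled, a newly created OSD will not appear under any host bucket until it is placed by hand. A minimal sketch, assuming a hypothetical new SSD OSD osd.9 on node1 with a weight of 1.455:

        # Example only: osd.9 and its weight are hypothetical
        # ceph osd crush add osd.9 1.455 host=node1-ssd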

      Notes

      • Back up the CRUSH map before expanding the cluster, removing OSDs, or editing the map (a one-line backup is sketched below).
      • Plan the CRUSH rules when the cluster is first deployed; changing them later, while the cluster is in use, causes large-scale PG migration.
      • By default, restarting the OSD services updates the CRUSH map automatically, which is another reason to keep a backup; alternatively, disable the automatic update on OSD start as shown above.
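
      A dated backup of the binary map is enough to roll back with ceph osd setcrushmap if an edit goes wrong (the file-name pattern is just a suggestion):

        # ceph osd getcrushmap -o crushmap-$(date +%F).bin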
