ceph cleanup pgs active+remapped

I run a 3-node Ceph cluster on Ubuntu Server 14.04. My problem is that 192 placement groups (PGs) are stuck in the status active+remapped. All nodes are online and all OSDs are up and in.

How can I get these PGs back to a clean state?

root@node1:~# ceph status
    cluster 776020a6-5c44-49c8-93e4-4a83703d4315
     health HEALTH_WARN 192 pgs stuck unclean
     monmap e1: 3 mons at     {node1=192.168.178.101:6789/0,node2=192.168.178.102:6789/0,node3=192.168.178.103:6789/0}, election epoch 14, quorum 0,1,2 node1,node2,node3
 osdmap e235: 12 osds: 12 up, 12 in
  pgmap v341719: 392 pgs, 5 pools, 225 GB data, 70891 objects
        597 GB used, 6604 GB / 7201 GB avail
             200 active+clean
             192 active+remapped

root@node1:~# ceph osd tree
# id    weight  type name       up/down reweight
-9      7.02    root erasure
-3      2.34            host node2-erasure
7       0.9                     osd.7   up      1
6       0.9                     osd.6   up      1
-5      2.34            host node3-erasure
11      0.9                     osd.11  up      1
10      0.9                     osd.10  up      1
-7      2.34            host node1-erasure
2       0.9                     osd.2   up      1
3       0.9                     osd.3   up      1
-8      7.02    root cache
-2      2.34            host node2-cache
5       0.27                    osd.5   up      1
4       0.27                    osd.4   up      1
-4      2.34            host node3-cache
9       0.27                    osd.9   up      1
8       0.27                    osd.8   up      1
-6      2.34            host node1-cache
0       0.27                    osd.0   up      1
1       0.27                    osd.1   up      1
-1      0       root default
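To narrow down which PGs are affected and why CRUSH has remapped them, you can dump the stuck PGs and query one directly. A minimal diagnostic sketch (the PG id 3.5a below is a placeholder; substitute an id from the dump output):

```shell
# List all PGs currently stuck in an unclean state
ceph pg dump_stuck unclean

# Query one of the listed PGs (3.5a is a placeholder id) to see
# its up/acting OSD sets and the reason it is remapped
ceph pg 3.5a query
```

If the query shows an acting set that differs from the up set, or fewer OSDs than the pool's size/erasure profile requires, CRUSH was unable to map the PG to enough distinct OSDs, which points toward the tunables fix in the answer below.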

Does anyone have an idea?

Best regards schlussbilanz

asked Feb 28 '26 by schlussbilanz
1 Answer

Found a solution here that worked for me:

https://www.spinics.net/lists/ceph-users/msg71083.html

Increase the choose_total_tries tunable by editing the decompiled CRUSH map (the CRUSH map is also where the tunable is set). Here is the "recipe" the author mentions, which worked for me:

# ceph osd getcrushmap -o crush.map
# crushtool -d crush.map -o crush.txt
# vi crush.txt          # set: tunable choose_total_tries 100
# crushtool -c crush.txt -o crush.map2
# ceph osd setcrushmap -i crush.map2
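For reference, the tunables block near the top of the decompiled crush.txt looks roughly like this (exact names and values depend on your Ceph release; the change is only to raise choose_total_tries, whose default is 50):

```
# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 100
tunable chooseleaf_descend_once 1
```

Raising the try limit matters here because with only three hosts and two OSDs per host under each root, CRUSH can exhaust its placement attempts before finding enough distinct OSDs, leaving PGs active+remapped. Note that injecting a new CRUSH map will trigger data movement, so expect some backfilling after the setcrushmap step.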
answered Mar 03 '26 by tksfz

