Remove a Node from a ScyllaDB Cluster (Down Scale)
Note
If you upgraded your cluster from version 2024.1, see After Upgrading from 2024.1.
You can remove nodes from your cluster to reduce its size.
Removing a Running Node
1. Run the nodetool status command to check the status of the nodes in your cluster.

Datacenter: DC1
Status=Up/Down
State=Normal/Leaving/Joining/Moving
--  Address        Load       Tokens  Owns (effective)  Host ID                               Rack
UN  192.168.1.201  112.82 KB  256     32.7%             8d5ed9f4-7764-4dbd-bad8-43fddce94b7c  B1
UN  192.168.1.202  91.11 KB   256     32.9%             125ed9f4-7777-1dbn-mac8-43fddce9123e  B1
UN  192.168.1.203  124.42 KB  256     32.6%             675ed9f4-6564-6dbd-can8-43fddce952gy  B1
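When scaling down a larger cluster, it can help to list any nodes that are not yet Up/Normal before proceeding. A minimal sketch, assuming the standard two-letter status codes in the first column of the output above:

# Print any node line whose status code is not UN (Up/Normal).
nodetool status | awk '$1 ~ /^[UD][NLJM]$/ && $1 != "UN" {print}'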
2. If the node status is Up Normal (UN), run the nodetool decommission command to remove the node you are connected to. Using nodetool decommission is the recommended method for cluster scale-down operations: it prevents data loss by ensuring that the node you’re removing streams its data to the remaining nodes in the cluster.

If the node is Joining, see Safely Remove a Joining Node.

If the node status is Down, see Removing an Unavailable Node.
Warning
Review the current disk space utilization on the existing nodes and make sure that the amount of data streamed from the node being removed fits into the disk space available on the remaining nodes. If there is not enough disk space on the remaining nodes, the removal of a node will fail. Add more storage to the remaining nodes before starting the removal procedure. A quick pre-flight check is sketched below.
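As the pre-flight check mentioned in the warning, you can compare the Load column from nodetool status against the free space on each remaining node. A minimal sketch, assuming the default data directory /var/lib/scylla (adjust the path for your deployment):

# On each remaining node: check free space on the Scylla data mount.
df -h /var/lib/scylla

# On the node being removed: start the decommission. The command blocks
# until the node has streamed all of its data to the remaining nodes.
nodetool decommission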
3. Run the nodetool netstats command to monitor the progress of the token reallocation.

4. Run the nodetool status command to verify that the node has been removed.

Datacenter: DC1
Status=Up/Down
State=Normal/Leaving/Joining/Moving
--  Address        Load       Tokens  Owns (effective)  Host ID                               Rack
UN  192.168.1.201  112.82 KB  256     32.7%             8d5ed9f4-7764-4dbd-bad8-43fddce94b7c  B1
UN  192.168.1.202  91.11 KB   256     32.9%             125ed9f4-7777-1dbn-mac8-43fddce9123e  B1
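Decommission can take a while on nodes holding a lot of data. A minimal polling sketch for steps 3 and 4, assuming the Cassandra-compatible netstats output in which the first line reports the node’s mode (LEAVING while data is streaming out):

# Run on the node being decommissioned: wait for streaming to finish.
while nodetool netstats | grep -q '^Mode: LEAVING'; do
    sleep 30
done
# Then, from any remaining node, confirm the node has left the ring.
nodetool status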
5. Manually remove the data and commit log stored on that node.

When a node is removed from the cluster, its data is not automatically removed. You need to manually remove the data to ensure it is no longer counted against the load on that node. Delete the data with the following commands:

sudo rm -rf /var/lib/scylla/data
sudo find /var/lib/scylla/commitlog -type f -delete
sudo find /var/lib/scylla/hints -type f -delete
sudo find /var/lib/scylla/view_hints -type f -delete
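If you plan to reuse the host, a small sanity check can confirm that nothing was left behind. This assumes the default directory layout used in the commands above; note that /var/lib/scylla/data was already removed recursively:

# Each remaining directory should now report a (near-)zero size.
sudo du -sh /var/lib/scylla/commitlog /var/lib/scylla/hints /var/lib/scylla/view_hints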
After Upgrading from 2024.1
The procedure described above applies to clusters where consistent topology updates are enabled. The feature is automatically enabled in new clusters.
If you’ve upgraded an existing cluster from version 2024.1, ensure that you have manually enabled consistent topology updates. Without consistent topology updates, the following limitations apply to the procedure:
It’s essential to ensure that the removed node never comes back to the cluster; a returning node might adversely affect your data (data resurrection or loss). To prevent the removed node from rejoining the cluster, remove it from the cluster network or VPC.
You can only remove one node at a time, and you need to verify that the node has been removed before removing another one (a sequential-removal sketch is shown at the end of this section).
If nodetool decommission starts executing but fails in the middle, for example, due to a power loss, consult the Handling Membership Change Failures document.
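To make the one-node-at-a-time constraint concrete, here is a hedged sketch of a sequential scale-down. The node addresses are placeholders, and the sketch assumes SSH access to each node being removed:

# Remove several nodes strictly one at a time (required when consistent
# topology updates are NOT enabled). The addresses below are examples.
for node in 192.168.1.203 192.168.1.202; do
    # decommission must run on the node that is being removed.
    ssh "$node" nodetool decommission
    # Do not start the next removal until this node has left the ring.
    while nodetool status | grep -q "$node"; do
        sleep 30
    done
done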