When you add a new node, the other nodes in the cluster stream data to it. This operation is called bootstrapping and may be time-consuming, depending on the data size and network bandwidth. If you are using a multi-availability-zone deployment, make sure the zones remain balanced after the addition.
You cannot add new nodes to the cluster if any existing node is down. Before adding new nodes, check the status of all nodes in the cluster with the nodetool status command.
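As a quick pre-flight sketch, the nodetool status output can be parsed to confirm every node is Up Normal (UN) before proceeding. The sample output below stands in for a live cluster; in practice, pipe nodetool status directly into the same filter.

```shell
# Sketch: verify every node reported by nodetool status is UN before adding
# a node. A captured sample stands in for live `nodetool status` output.
status_sample='UN 192.168.1.201 112.82 KB 256 32.7% 8d5ed9f4 B1
UJ 192.168.1.203 124.42 KB 256 32.6% 675ed9f4 B1'

# Count lines whose status/state code is a valid two-letter code other than UN.
not_un=$(printf '%s\n' "$status_sample" | awk '$1 ~ /^[UD][NLJM]$/ && $1 != "UN"' | wc -l)
if [ "$not_un" -eq 0 ]; then
  echo "all nodes UN - safe to add a new node"
else
  echo "$not_un node(s) not UN - resolve before adding a new node"
fi
```

With the sample above, the check reports one node not in UN state, so node addition should wait.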
Log into one of the nodes in the cluster to collect the following information:
cluster_name - grep cluster_name /etc/scylla/scylla.yaml
seeds - grep seeds: /etc/scylla/scylla.yaml
endpoint_snitch - grep endpoint_snitch /etc/scylla/scylla.yaml
ScyllaDB version - scylla --version
Authenticator - grep authenticator /etc/scylla/scylla.yaml
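The individual grep commands above can be combined into one pass over scylla.yaml. This is a sketch; the sample file created here stands in for /etc/scylla/scylla.yaml on a real node, and the sample values are placeholders.

```shell
# Sketch: collect the cluster parameters from scylla.yaml in a single pass.
# A temporary sample file stands in for /etc/scylla/scylla.yaml.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
cluster_name: 'mycluster'
seeds: "192.168.1.201"
endpoint_snitch: GossipingPropertyFileSnitch
authenticator: AllowAllAuthenticator
EOF

# Pull all four parameters at once instead of four separate greps.
params=$(grep -E '^(cluster_name|seeds|endpoint_snitch|authenticator):' "$cfg")
echo "$params"
rm -f "$cfg"
```

On a real node, replace "$cfg" with /etc/scylla/scylla.yaml; scylla --version still needs to be run separately.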
Install ScyllaDB on the nodes you want to add to the cluster. See Getting Started for further instructions. Follow the ScyllaDB installation procedure up to the scylla.yaml configuration phase. Make sure that the ScyllaDB version of the new node is identical to that of the other nodes in the cluster.
If the node starts during the process, follow What to do if a Node Starts Automatically.
Note
Make sure to use the same ScyllaDB patch release on the new/replaced node to match the rest of the cluster. Adding a node with a different release to the cluster is not recommended. For example, use the following to install a specific ScyllaDB patch release (substitute your deployed version):
ScyllaDB Enterprise - sudo yum install scylla-enterprise-2018.1.9
ScyllaDB open source - sudo yum install scylla-3.0.3
Note
It’s important to keep the I/O scheduler configuration in sync on nodes with the same hardware. For that reason, when provisioning a new node with exactly the same hardware setup as the existing nodes in the cluster, we recommend skipping scylla_io_setup and instead copying the following files from an existing node:
/etc/scylla.d/io.conf
/etc/scylla.d/io_properties.yaml
Using a different I/O scheduler configuration may result in unnecessary bottlenecks.
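A quick byte-for-byte comparison confirms the copied I/O configuration matches the source node before starting ScyllaDB. This is a sketch: the temporary files below stand in for /etc/scylla.d/io.conf as fetched from an existing node and as placed on the new one.

```shell
# Sketch: confirm the I/O scheduler config copied from an existing node
# matches the local copy exactly. Temp files stand in for the real
# /etc/scylla.d/io.conf on the two nodes.
existing=$(mktemp); local_copy=$(mktemp)
printf 'SEASTAR_IO="--max-io-requests=64"\n' > "$existing"
printf 'SEASTAR_IO="--max-io-requests=64"\n' > "$local_copy"

if cmp -s "$existing" "$local_copy"; then
  io_status="in sync"
else
  io_status="differs"
fi
echo "io.conf $io_status"
rm -f "$existing" "$local_copy"
```

Run the same comparison for io_properties.yaml; any mismatch should be resolved before the node starts.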
On each node, edit the scylla.yaml file in /etc/scylla/ to configure the parameters listed below.
cluster_name - Specifies the name of the cluster.
listen_address - Specifies the IP address that ScyllaDB uses to connect to the other ScyllaDB nodes in the cluster.
endpoint_snitch - Specifies the selected snitch.
rpc_address - Specifies the address for CQL client connections.
seeds - Specifies the IP address of an existing node in the cluster. The new node will use this IP to connect to the cluster and learn the cluster topology and state.
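Putting the parameters above together, a minimal scylla.yaml fragment for a new node might look like the following. All values are placeholders; use the cluster_name, seeds, and endpoint_snitch values collected from the existing cluster, and the new node's own IP for listen_address and rpc_address.

```yaml
# Sketch of the relevant scylla.yaml settings for a new node (placeholder values).
cluster_name: 'mycluster'            # must match the existing cluster
listen_address: 192.168.1.203        # this node's IP, used for inter-node traffic
rpc_address: 192.168.1.203           # this node's IP, used for CQL clients
endpoint_snitch: GossipingPropertyFileSnitch   # must match the existing cluster
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "192.168.1.201"       # IP of an existing node in the cluster
```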
Start the nodes with the following command:
sudo systemctl start scylla-server
Or, with a some-scylla container already running:
docker exec -it some-scylla supervisorctl start scylla
Verify that the nodes were added to the cluster using the nodetool status command. Other nodes in the cluster will be streaming data to the new nodes, so the new nodes will be in Up Joining (UJ) status. Wait until the nodes’ status changes to Up Normal (UN) - the time depends on the data size and network bandwidth.
Example:
Nodes in the cluster are streaming data to the new node:
Datacenter: DC1
Status=Up/Down
State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN 192.168.1.201 112.82 KB 256 32.7% 8d5ed9f4-7764-4dbd-bad8-43fddce94b7c B1
UN 192.168.1.202 91.11 KB 256 32.9% 125ed9f4-7777-1dbn-mac8-43fddce9123e B1
UJ 192.168.1.203 124.42 KB 256 32.6% 675ed9f4-6564-6dbd-can8-43fddce952gy B1
Nodes in the cluster finished streaming data to the new node:
Datacenter: DC1
Status=Up/Down
State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN 192.168.1.201 112.82 KB 256 32.7% 8d5ed9f4-7764-4dbd-bad8-43fddce94b7c B1
UN 192.168.1.202 91.11 KB 256 32.9% 125ed9f4-7777-1dbn-mac8-43fddce9123e B1
UN 192.168.1.203 124.42 KB 256 32.6% 675ed9f4-6564-6dbd-can8-43fddce952gy B1
When the new node status is Up Normal (UN), run the nodetool cleanup command on all nodes in the cluster except for the new node that has just been added. Cleanup removes keys that were streamed to the newly added node and are no longer owned by the node.
Note
To prevent data resurrection, it’s essential to complete cleanup after adding nodes and before any node is decommissioned or removed. However, cleanup may consume significant resources. Use the following guidelines to reduce cleanup impact:
Tip 1: When adding multiple nodes, wait until all of them have been added, then run cleanup on every node except the last one added.
Tip 2: Postpone cleanup to low demand hours while ensuring it completes successfully before any node is decommissioned or removed.
Tip 3: Run cleanup one node at a time, reducing overall cluster impact.
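The tips above can be sketched as a loop that runs cleanup on one node at a time while skipping the newly added node. The host list and NEW_NODE are hypothetical placeholders, and the ssh commands are echoed rather than executed so the sequence can be reviewed first.

```shell
# Sketch: run `nodetool cleanup` sequentially (Tip 3), skipping the newly
# added node. Hosts and NEW_NODE are placeholders; commands are echoed,
# not executed.
NEW_NODE="192.168.1.203"
cleanup_targets=""
for host in 192.168.1.201 192.168.1.202 192.168.1.203; do
  [ "$host" = "$NEW_NODE" ] && continue
  cleanup_targets="$cleanup_targets $host"
  echo "would run: ssh $host nodetool cleanup"
done
```

Dropping the echo turns the sketch into an actual sequential cleanup; running it during low-demand hours also follows Tip 2.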
Wait until the new node becomes UN (Up Normal) in the output of nodetool status on one of the old nodes.
If you are using ScyllaDB Monitoring, update the monitoring stack to monitor the new node. If you are using ScyllaDB Manager, make sure the Manager Agent is installed on the new node and that Manager can access it.