

Adding a New Node Into an Existing ScyllaDB Cluster (Out Scale)¶

Note

If you upgraded your cluster from version 2024.1, see After Upgrading from 2024.1.

When you add a new node, the other nodes in the cluster stream data to it. This operation is called bootstrapping and may be time-consuming, depending on the data size and network bandwidth. If your cluster is deployed across multiple availability zones, make sure the zones remain balanced after adding nodes.

Prerequisites¶

Check the Status of Nodes¶

You cannot add new nodes to the cluster if any existing node is down. Before adding new nodes, check the status of all nodes in the cluster using the nodetool status command.
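
All nodes must report UN (Up Normal) before you proceed. For example, you can filter the output for nodes in any other state (the awk filter below is our own sketch, not part of nodetool):

    # Flag any node whose status is not UN (Up Normal).
    nodetool status | awk '$1 ~ /^[UD][NLJM]$/ && $1 != "UN" { print "not UN:", $2 }'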

Collect Cluster Information¶

Log into one of the nodes in the cluster to collect the following information:

  • cluster_name - grep cluster_name /etc/scylla/scylla.yaml

  • seeds - grep seeds: /etc/scylla/scylla.yaml

  • endpoint_snitch - grep endpoint_snitch /etc/scylla/scylla.yaml

  • Scylla version - scylla --version

  • Authenticator - grep authenticator /etc/scylla/scylla.yaml
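
You can collect everything in one pass by combining the commands above:

    grep -E 'cluster_name|seeds:|endpoint_snitch|authenticator' /etc/scylla/scylla.yaml
    scylla --version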

Procedure¶

  1. Install ScyllaDB on the nodes you want to add to the cluster. See Getting Started for instructions. Follow the ScyllaDB installation procedure up to the scylla.yaml configuration phase. Make sure that the ScyllaDB version of the new node is identical to the version running on the other nodes in the cluster.

    If the node starts during the process, follow What to do if a Node Starts Automatically.

    Note

    Make sure to use the same ScyllaDB patch release on the new or replaced node to match the rest of the cluster. Adding a new node with a different release is not recommended. For example, use one of the following commands to install a specific patch release (substitute your deployed version):

    • Scylla Enterprise - sudo yum install scylla-enterprise-2018.1.9

    • Scylla open source - sudo yum install scylla-3.0.3

    Note

    It’s important to keep the I/O scheduler configuration in sync on nodes with the same hardware. For that reason, we recommend skipping scylla_io_setup when provisioning a new node with exactly the same hardware setup as the existing nodes in the cluster.

    Instead, we recommend copying the following files from an existing node to the new node after running scylla_setup, and then restarting the scylla-server service (if it is already running):
    • /etc/scylla.d/io.conf

    • /etc/scylla.d/io_properties.yaml

    Using a different I/O scheduler configuration may result in unnecessary bottlenecks.
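
    For example (a sketch; replace existing-node with the address of a node that is already in the cluster, and adjust for your SSH setup):

    # Copy the I/O scheduler configuration from an existing node.
    sudo scp root@existing-node:/etc/scylla.d/io.conf /etc/scylla.d/io.conf
    sudo scp root@existing-node:/etc/scylla.d/io_properties.yaml /etc/scylla.d/io_properties.yaml
    # Restart only if scylla-server is already running.
    sudo systemctl restart scylla-server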

  2. On each new node, edit the /etc/scylla/scylla.yaml file to configure the parameters listed below.

    • cluster_name - Specifies the name of the cluster.

    • listen_address - Specifies the IP address that ScyllaDB uses to connect to the other ScyllaDB nodes in the cluster.

    • endpoint_snitch - Specifies the selected snitch.

    • rpc_address - Specifies the address for client connections (Thrift, CQL).

    • seeds - Specifies the IP address of an existing node in the cluster. The new node will use this IP to connect to the cluster and learn the cluster topology and state.
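
    A minimal sketch of the resulting settings (the addresses and names below are placeholders; use the values you collected in the Prerequisites):

    cluster_name: 'my-cluster'
    listen_address: 192.168.1.203
    rpc_address: 192.168.1.203
    endpoint_snitch: GossipingPropertyFileSnitch
    seed_provider:
      - class_name: org.apache.cassandra.locator.SimpleSeedProvider
        parameters:
          - seeds: "192.168.1.201"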

  3. Start the node with the following command:

    sudo systemctl start scylla-server

    If you are running ScyllaDB in Docker (with the some-scylla container already running):

    docker exec -it some-scylla supervisorctl start scylla

  4. Verify that the nodes were added to the cluster using the nodetool status command. The other nodes in the cluster stream data to the new nodes, so the new nodes will be in Up Joining (UJ) status. Wait until the nodes’ status changes to Up Normal (UN); the time this takes depends on the data size and network bandwidth.

    Example:

    Nodes in the cluster are streaming data to the new node:

    Datacenter: DC1
    Status=Up/Down
    State=Normal/Leaving/Joining/Moving
    --  Address        Load       Tokens  Owns (effective)                         Host ID         Rack
    UN  192.168.1.201  112.82 KB  256     32.7%             8d5ed9f4-7764-4dbd-bad8-43fddce94b7c   B1
    UN  192.168.1.202  91.11 KB   256     32.9%             125ed9f4-7777-1dbn-mac8-43fddce9123e   B1
    UJ  192.168.1.203  124.42 KB  256     32.6%             675ed9f4-6564-6dbd-can8-43fddce952gy   B1
    

    Nodes in the cluster finished streaming data to the new node:

    Datacenter: DC1
    Status=Up/Down
    State=Normal/Leaving/Joining/Moving
    --  Address        Load       Tokens  Owns (effective)                         Host ID         Rack
    UN  192.168.1.201  112.82 KB  256     32.7%             8d5ed9f4-7764-4dbd-bad8-43fddce94b7c   B1
    UN  192.168.1.202  91.11 KB   256     32.9%             125ed9f4-7777-1dbn-mac8-43fddce9123e   B1
    UN  192.168.1.203  124.42 KB  256     32.6%             675ed9f4-6564-6dbd-can8-43fddce952gy   B1
    
  5. When the new node’s status is Up Normal (UN), run the nodetool cleanup command on all nodes in the cluster except the new node that has just been added. Cleanup removes keys that were streamed to the newly added node and are no longer owned by the node on which cleanup runs.

    Note

    To prevent data resurrection, it’s essential to complete cleanup after adding nodes and before any node is decommissioned or removed. However, cleanup may consume significant resources. Use the following guidelines to reduce cleanup impact:

    Tip 1: When adding multiple nodes, run cleanup once, after all the nodes have been added, on every node except the last one added.

    Tip 2: Postpone cleanup to low-demand hours, ensuring it completes successfully before any node is decommissioned or removed.

    Tip 3: Run cleanup one node at a time to reduce the overall cluster impact, as in the sketch below.
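
    For example, a sketch of Tip 3 (the host list is a placeholder; list your existing nodes, excluding the newly added one, and adjust for your SSH setup):

    for host in 192.168.1.201 192.168.1.202; do
        ssh "$host" nodetool cleanup
    done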

  6. Wait until the new node becomes UN (Up Normal) in the output of nodetool status on one of the old nodes.

  7. If you are using ScyllaDB Monitoring, update the monitoring stack to monitor the new node. If you are using ScyllaDB Manager, make sure the Manager Agent is installed on the new node and that Manager can access it.

After Upgrading from 2024.1¶

The procedure described above applies to clusters where consistent topology updates are enabled. The feature is automatically enabled in new clusters.

If you’ve upgraded an existing cluster from version 2024.1, ensure that you have manually enabled consistent topology updates. If consistent topology updates are not enabled, the following limitations apply when you run this procedure:

  • You can only bootstrap one node at a time. You need to wait until the status of one new node becomes UN (Up Normal) before adding another new node.

  • If the node starts bootstrapping but fails in the middle, for example, due to a power loss, you can retry bootstrap by restarting the node. If you don’t want to retry, or the node refuses to boot on subsequent attempts, consult the Handling Membership Change Failures document.

  • The system_auth keyspace has not been migrated to the system keyspace. As a result, if authenticator is set to PasswordAuthenticator, you must increase the replication factor of the system_auth keyspace. We recommend setting the system_auth replication factor to the number of nodes in each DC, as shown in the example below.
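
For example, a sketch of the corresponding CQL (replace dc1 and 3 with your data center name and the number of nodes in that DC):

    ALTER KEYSPACE system_auth WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 3};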
