A state where data is in order and organized. ScyllaDB runs anti-entropy processes to ensure that all replicas contain the most recent data and that data is consistent between replicas. See ScyllaDB Anti-Entropy.
When a new node is added to a cluster, the bootstrap process ensures that the data in the cluster is automatically redistributed to the new node. A new node in this case is an empty node without system tables or data. See bootstrap.
The CAP Theorem is the notion that C (Consistency), A (Availability), and P (Partition Tolerance) of data are mutually constrained in a distributed system: strengthening any two of these properties comes at the expense of the third. ScyllaDB chooses availability and partition tolerance over consistency. See Fault Tolerance.
One or more ScyllaDB nodes, acting in concert, which own a single contiguous token range. State is communicated between nodes in the cluster via the Gossip protocol. See Ring Architecture.
A single or multi-column clustering key determines a row’s uniqueness and sort order on disk within a partition. See Ring Architecture.
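As an illustration, here is a hypothetical CQL table in which rows within each partition are stored sorted by the clustering key (all names below are invented for the example):

```cql
-- 'device_id' is the partition key; rows within each partition are
-- kept sorted on disk by the clustering key 'reading_time'.
CREATE TABLE readings (
    device_id    uuid,
    reading_time timestamp,
    value        double,
    PRIMARY KEY (device_id, reading_time)
) WITH CLUSTERING ORDER BY (reading_time DESC);
```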
See table.
The process of reading several SSTables, comparing the data and time stamps and then writing one SSTable containing the merged, most recent, information. See Compaction Strategies.
Determines which of the SSTables will be compacted, and when. See Compaction Strategies.
A dynamic value which dictates the number of replicas (in a cluster) that must acknowledge a read or write operation. This value is set by the client on a per-operation basis. For the CQL Shell, the consistency level defaults to ONE for read and write operations. See Consistency Levels.
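For instance, in cqlsh the session consistency level can be inspected or changed with the CONSISTENCY command (a minimal illustration; QUORUM is just one possible level):

```cql
-- cqlsh commands for inspecting and setting the session consistency level.
CONSISTENCY         -- show the current consistency level (defaults to ONE)
CONSISTENCY QUORUM  -- require a quorum of replicas for subsequent operations
```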
Cache dummy rows are entries in the row set which have a clustering position but do not represent CQL rows written by users. The ScyllaDB cache uses them to mark the boundaries of population ranges, representing the information that a whole range is complete and that there is no need to go to SSTables to read the gaps between existing row entries when scanning.
A state where data is not consistent, which results when replicas are not synced and hold divergent data. ScyllaDB has anti-entropy measures in place to correct this. See ScyllaDB Anti-Entropy.
In ScyllaDB, when considering the CAP Theorem, availability and partition tolerance are considered a higher priority than consistency.
A short record of a write request that is held by the coordinator until the unresponsive node becomes responsive again, at which point the write request data in the hint is written to the replica node. See Hinted Handoff.
Reduces data inconsistency which can occur when a node is down or there is network congestion. In ScyllaDB, when data is written and there is an unresponsive replica, the coordinator writes itself a hint. When the node recovers, the coordinator sends the node the pending hints to ensure that it has the data it should have received. See Hinted Handoff.
Denoting an element of a set which is unchanged in value when multiplied or otherwise operated on by itself. ScyllaDB Counters are not idempotent because, in the case of a write failure, the client cannot safely retry the request.
JBOD, or Just a Bunch Of Disks, is a non-RAID storage configuration in which a server with multiple disks instantiates a separate file system per disk. The benefit is that if a single disk fails, only that disk needs to be replaced, not the whole disk array. The disadvantage is that free space and load may not be evenly distributed. See the FAQ.
KMIP is a communication protocol that defines message formats for storing keys on a key management server (KMIP server). You can use a KMIP server to protect your keys when using Encryption at Rest. See Encryption at Rest.
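A sketch of enabling table-level Encryption at Rest with a KMIP key provider; the keyspace, table, and 'kmip1' host alias are assumptions for the example, and the exact options should be verified against the Encryption at Rest documentation:

```cql
-- Encrypt this table's data on disk using keys served by a KMIP server
-- ('kmip1' stands in for a KMIP host alias configured in scylla.yaml).
CREATE TABLE ks.secret_data (id int PRIMARY KEY, payload blob)
    WITH scylla_encryption_options = {
        'cipher_algorithm':    'AES/CBC/PKCS5Padding',
        'secret_key_strength': 128,
        'key_provider':        'KmipKeyProviderFactory',
        'kmip_host':           'kmip1'
    };
```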
A collection of tables with attributes which define how data is replicated on nodes. See Ring Architecture.
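For example, a keyspace definition carries the replication attributes for its tables (keyspace and data center names below are illustrative):

```cql
-- Tables in this keyspace are replicated 3 ways within data center 'dc1'.
CREATE KEYSPACE catalog
    WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 3};
```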
LCS uses small, fixed-size (by default 160 MB) SSTables divided into different levels. See Compaction Strategies.
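As a sketch, a table can be switched to LCS with the compaction table option (the table name is illustrative; 160 MB is the stated default, shown explicitly):

```cql
-- Use Leveled Compaction with the default 160 MB SSTable size.
ALTER TABLE ks.events
    WITH compaction = {'class': 'LeveledCompactionStrategy',
                       'sstable_size_in_mb': 160};
```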
The ability to update a configuration property without restarting the node. Properties that support live update can be changed via the system.config virtual table or the REST API, or by changing the value in the config file and then sending SIGHUP to the scylla process, triggering it to re-read its configuration. In all cases, the change takes effect without a node restart.
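As an illustration of the virtual-table path, reading and updating a property through system.config might look like the following (the property name is only an example; whether a given property supports live update must be checked in the documentation):

```cql
-- Inspect a configuration property through the system.config virtual table.
SELECT name, value FROM system.config WHERE name = 'query_tombstone_page_limit';

-- Live-update it (only live-updatable properties take effect this way).
UPDATE system.config SET value = '20000' WHERE name = 'query_tombstone_page_limit';
```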
A technique of keeping sorted files and merging them. LSM is a data structure that maintains key-value pairs. See Compaction.
A hyperthreaded core on a hyperthreaded system, or a physical core on a system without hyperthreading.
An in-memory data structure servicing both reads and writes. Once full, the Memtable flushes to an SSTable. See Compaction Strategies.
A hash function created by Austin Appleby, and used by the Partitioner to distribute the partitions between nodes. The name comes from two basic operations, multiply (MU) and rotate (R), used in its inner loop. The MurmurHash3 version used in ScyllaDB originated from Apache Cassandra, and is not identical to the official MurmurHash3 calculation. More here.
A change to data, such as a column or columns to insert, or a deletion. See Hinted Handoff.
A single installed instance of ScyllaDB. See Ring Architecture.
A simple command-line interface for administering a ScyllaDB node. A nodetool command can display a given node’s exposed operations and attributes. ScyllaDB’s nodetool supports a subset of the operations available in Apache Cassandra’s nodetool. See Ring Architecture.
A subset of data that is stored on a node and replicated across nodes. There are two ways to consider a partition. In CQL, a partition appears as a group of sorted rows, and is the unit of access for queried data, given that most queries access a single partition. On the physical layer, a partition is a unit of data stored on a node and is identified by a partition key. See Ring Architecture.
The unique identifier for a partition. A partition key may be hashed from the first column in the primary key, or from a set of columns, often referred to as a compound primary key. The partition key determines which virtual node gets the first partition replica. See Ring Architecture.
A hash function for computing which data is stored on which node in the cluster. The partitioner takes a partition key as an input and returns a ring token as an output. By default, ScyllaDB uses the 64-bit MurmurHash3 function, and this hash range is numerically represented as a signed 64-bit integer. See Ring Architecture.
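The partitioner's output can be observed with the CQL token() function (reusing the illustrative readings table from the clustering key example above):

```cql
-- token() returns the ring position the partitioner computed for each key.
SELECT token(device_id), device_id FROM readings LIMIT 3;
```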
In a CQL table definition, the primary key clause specifies the partition key and optional clustering key. These keys uniquely identify each partition and row within a partition. See Ring Architecture.
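For example, a hypothetical table whose primary key combines a compound partition key with a clustering key (the inner parentheses mark which columns form the partition key):

```cql
-- Partition key: (vehicle_id, day). Clustering key: ts.
CREATE TABLE trips (
    vehicle_id uuid,
    day        date,
    ts         timestamp,
    location   text,
    PRIMARY KEY ((vehicle_id, day), ts)
);
```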
Quorum is a global consistency level setting across the entire cluster including all data centers. See Consistency Levels.
Excessive read requests which require many SSTables. RA is calculated by the number of disk reads per query. High RA occurs when there are many pages to read in order to answer a query. See Compaction Strategies.
A read operation occurs when an application gets information from an SSTable and does not change that information in any way. See Fault Tolerance.
An anti-entropy mechanism for read operations ensuring that replicas are updated with most recently updated data. These repairs run automatically, asynchronously, and in the background. See ScyllaDB Read Repair.
A verification phase during a data migration where the target data is compared against original source data to ensure that the migration architecture has transferred the data correctly. See ScyllaDB Read Repair.
A process which runs in the background and synchronizes the data between nodes, so that eventually, all the replicas hold the same data. See ScyllaDB Repair.
RBNO is an internal ScyllaDB mechanism that uses repair, instead of streaming, to synchronize data between the nodes in a cluster. RBNO significantly improves database performance and data consistency.
RBNO is enabled by default for a subset of node operations. See Repair Based Node Operations for details.
The process of replicating data across nodes in a cluster. See Fault Tolerance.
The total number of replica nodes across a given cluster. An RF of 1 means that the data exists only on a single node in the cluster, with no fault tolerance. This number is a setting defined for each keyspace, and can be defined per DC. All replicas share equal priority; there are no primary or master replicas. See Fault Tolerance.
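For example, per-DC replication factors can be set or changed on a keyspace (keyspace and DC names are illustrative):

```cql
-- Replicate 3 ways in 'dc1' and 2 ways in 'dc2'.
ALTER KEYSPACE catalog
    WITH replication = {'class': 'NetworkTopologyStrategy',
                        'dc1': 3, 'dc2': 2};
```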
Rewriting a set of SSTables to satisfy a compaction strategy’s criteria, for example after restoring data from an old backup or after changing the compaction strategy.
Splitting an SSTable that is owned by more than one shard (core) into SSTables that are each owned by a single shard. For example, when restoring data from a different server, importing SSTables from Apache Cassandra, or changing the number of cores in a machine (upscaling).
Each ScyllaDB node is internally split into shards: independent threads, each bound to a dedicated core. Each shard is allotted CPU, RAM, persistent storage, and networking resources, which it uses as efficiently as possible. See ScyllaDB Shard per Core Architecture for more information.
Dropping requests to protect the system. This occurs if a request is too large or exceeds the maximum number of concurrent requests per shard.
Triggers when the system has enough (four by default) similarly sized SSTables. See Compaction Strategies.
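For instance, the four-SSTable trigger corresponds to STCS’s min_threshold option, shown here at its default (table name illustrative):

```cql
-- Size-Tiered Compaction kicks in once four similarly sized SSTables exist.
ALTER TABLE ks.events
    WITH compaction = {'class': 'SizeTieredCompactionStrategy',
                       'min_threshold': 4};
```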
Snapshots in ScyllaDB are an essential part of the backup and restore mechanism. Whereas in other databases a backup starts with creating a copy of a data file (cold backup, hot backup, shadow copy backup), in ScyllaDB the process starts with creating a table or keyspace snapshot. See ScyllaDB Snapshots.
The mapping from the IP addresses of nodes to physical and virtual locations, such as racks and data centers. There are several types of snitches. The type of snitch affects the request routing mechanism. See ScyllaDB Snitches.
Excessive disk space usage which requires that the disk be larger than a perfectly-compacted representation of the data (i.e., all the data in one single SSTable). SA is calculated as the ratio of the size of database files on a disk to the actual data size. High SA occurs when there is more disk space being used than the size of the data. See Compaction Strategies.
A concept borrowed from Google Bigtable, SSTables or Sorted String Tables store a series of immutable rows, where each row is identified by its row key. See Compaction Strategies. The SSTable format is a persistent file format. See ScyllaDB SSTable Format.
A collection of columns fetched by row. Columns are ordered by Clustering Key. See Ring Architecture.
TWCS is designed for time series data. See Compaction Strategies.
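A sketch of enabling TWCS with daily windows (table name and window size are illustrative):

```cql
-- Group SSTables into one-day time windows, suitable for time series data.
ALTER TABLE ks.metrics
    WITH compaction = {'class': 'TimeWindowCompactionStrategy',
                       'compaction_window_unit': 'DAYS',
                       'compaction_window_size': 1};
```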
A value in a range, used to identify both nodes and partitions. Each node in a ScyllaDB cluster is given an (initial) token, which defines the end of the range a node handles. See Ring Architecture.
The total range of potential unique identifiers supported by the partitioner. By default, each ScyllaDB node in the cluster handles 256 token ranges. Each token range corresponds to a Vnode. Each range of hashes, in turn, is a segment of the total range of a given hash function. See Ring Architecture.
A marker that indicates that data has been deleted. A large number of tombstones may impact read performance and disk usage, so an efficient tombstone garbage collection strategy should be employed. See Tombstones GC options.
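For example, a table’s tombstone garbage collection mode can be set with the tombstone_gc option (the 'repair' mode shown here delays purging until the data has been repaired; table name illustrative):

```cql
-- Only purge tombstones after the relevant token range has been repaired.
ALTER TABLE ks.events WITH tombstone_gc = {'mode': 'repair'};
```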
The possibility of unique, per-query consistency level settings. These are incremental and override fixed database settings intended to enforce data consistency. Such settings may be set directly from a CQL statement when response speed for a given query or operation is more important. See Fault Tolerance.
A range of tokens owned by a single ScyllaDB node. ScyllaDB nodes are configurable and support a set of Vnodes. In legacy token selection, each node owns one token (or token range). With Vnodes, a node can own many tokens or token ranges; within a cluster, these may be selected randomly from a non-contiguous set. In a Vnode configuration, each token falls within a specific token range, which in turn is represented as a Vnode. Each Vnode is then allocated to a physical node in the cluster. See Ring Architecture.
A database category that allows you to manage different sources of database activities, such as requests or administrative activities. By defining workloads, you can specify how ScyllaDB will process those activities. For example, you can prioritize one workload over another (e.g., user requests over administrative activities). See Workload Prioritization.
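A sketch using service levels, which ScyllaDB Enterprise provides for workload prioritization; the level names, share values, and role below are assumptions for the example:

```cql
-- Give the interactive workload more scheduler shares than the batch workload.
CREATE SERVICE LEVEL interactive WITH SHARES = 800;
CREATE SERVICE LEVEL batch WITH SHARES = 200;

-- Attach a service level to the role an application logs in with.
ATTACH SERVICE LEVEL interactive TO app_role;
```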
Excessive compaction of the same data. WA is calculated by the ratio of bytes written to storage versus bytes written to the database. High WA occurs when there are more bytes/second written to storage than are actually written to the database. See Compaction Strategies.
A write operation occurs when information is added or removed from an SSTable. See Fault Tolerance.