Decentralized Database (DDB) – Dual-Consensus Architecture

Ensuring Performance and Scalability

Integrating a full database into a blockchain naturally raises performance concerns, but NCOG’s dual-consensus approach is designed to minimize overhead. Each mechanism below is accompanied by a short illustrative sketch after the list:

  • Parallelism: Database operations execute on the DDB subset in parallel with the main chain’s normal transaction processing. While DDB nodes are busy endorsing a heavy transaction, other validators continue finalizing unrelated transactions. This pipelining means the system’s throughput is roughly the sum of regular TPS and database TPS (subject to network limits). The architecture is designed so that the DAG consensus and the DDB consensus run concurrently and converge at the finality step, rather than serially blocking each other (see the pipelining sketch after this list). 
  • No Redundant Execution: Only the DDB validators actually run the SQL logic; non-DDB validators skip that work, saving a large amount of computation. If there are 100 validators of which 10 are DDB nodes, the other 90 never execute the heavy transaction – they only verify signatures on its outcome (see the endorsement-verification sketch below). This greatly improves overall throughput, since no node is burdened with every operation. The DDB committee can also be scaled (e.g., more powerful hardware or more nodes) independently of the rest of the network. 
  • Write-Ahead Log (WAL) Usage: By replicating via the WAL, NCOG avoids re-computing transactions on each node. Applying a WAL record is much faster than parsing and executing high-level queries, and it guarantees that each node’s database ends up bit-for-bit identical after the commit (see the WAL-replay sketch below). This is how traditional database replicas achieve consistency, here applied in a Byzantine setting, and it is a key reason NCOG can integrate a database without a massive slowdown. 
  • Batching and Optimizations: The DDB validators can batch multiple small updates together before endorsing, amortizing the consensus overhead (see the batching sketch below). PostgreSQL’s WAL also groups the changes from a transaction, so a complex procedure with many writes still yields a single package to endorse. In tests of BFT database systems, thousands of writes per second are achievable on a small committee. Furthermore, since reads do not require consensus (any node can read its local state), read scalability is high: read-only queries can be served by horizontally scaled read replicas or load-balanced across DDB nodes, with the assurance that all are up to date with the last finalized block. 
  • Gas and Fees for DB Ops: Each database transaction incurs a gas fee (paid in the $NEC token) proportional to the resources it uses (e.g., rows read/written, execution time), just like fees for smart contracts. This ensures that heavy operations pay their fair share and prevents spam. DDB validators receive a portion of these fees as a reward for their extra work, incentivizing them to run high-performance servers. If a transaction would exceed its allotted gas (too large a query, or too slow), execution is aborted to avoid locking the system (see the gas-metering sketch below). This mechanism keeps database usage sustainable and predictable. 
  • Consistency and Conflict Handling: In the rare case that two database transactions conflict (e.g., two concurrent updates to the same record), the first one finalized invalidates the second’s read assumptions. The main chain detects this via versioning and marks the second as failed, so it is never applied – much as a Fabric transaction can be invalidated at the commit phase if its read-set was altered by a prior transaction (see the read-set validation sketch below). Serializability (one transaction at a time for a given piece of data) is thus maintained. Developers may need to catch the failure and retry the transaction, but the platform ensures no inconsistent double-writes occur. 
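
To make the pipelining concrete, here is a minimal Go sketch (all types and names are hypothetical, not from the NCOG codebase) in which regular transactions and DDB transactions flow through independent workers and converge only at a shared finality step:

```go
package main

import (
	"fmt"
	"sync"
)

// Tx is a hypothetical transaction; Kind distinguishes regular
// DAG transactions from heavy DDB transactions.
type Tx struct {
	ID   int
	Kind string // "regular" or "ddb"
}

func main() {
	finality := make(chan Tx) // both pipelines converge here
	var wg sync.WaitGroup

	regular := make(chan Tx) // pipeline 1: ordinary validators
	ddb := make(chan Tx)     // pipeline 2: the DDB committee

	worker := func(in <-chan Tx) {
		defer wg.Done()
		for tx := range in {
			// ...execution/endorsement happens here, independently...
			finality <- tx
		}
	}
	wg.Add(2)
	go worker(regular)
	go worker(ddb)

	go func() { // feed both pipelines concurrently
		for i := 0; i < 3; i++ {
			regular <- Tx{ID: i, Kind: "regular"}
			ddb <- Tx{ID: 100 + i, Kind: "ddb"}
		}
		close(regular)
		close(ddb)
	}()

	go func() { wg.Wait(); close(finality) }()

	for tx := range finality { // a single finality step for both streams
		fmt.Printf("finalized %s tx %d\n", tx.Kind, tx.ID)
	}
}
```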
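
A non-DDB validator’s cheap verification step might look like the following sketch: rather than executing the SQL, it checks that a quorum of the DDB committee signed the result digest. Ed25519 and the 2f+1 quorum rule here are illustrative assumptions, not NCOG’s confirmed scheme:

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

// Endorsement pairs a committee member's public key with its
// signature over the result digest (hypothetical structure).
type Endorsement struct {
	Pub ed25519.PublicKey
	Sig []byte
}

// verifyOutcome re-checks signatures only -- no SQL execution.
// It requires a quorum of valid endorsements from known members.
func verifyOutcome(result []byte, ends []Endorsement,
	committee map[string]bool, quorum int) bool {
	digest := sha256.Sum256(result)
	valid := 0
	for _, e := range ends {
		if committee[string(e.Pub)] && ed25519.Verify(e.Pub, digest[:], e.Sig) {
			valid++
		}
	}
	return valid >= quorum
}

func main() {
	// Simulate a 4-member DDB committee endorsing a result.
	result := []byte("post-state root of the DDB transaction")
	digest := sha256.Sum256(result)

	committee := map[string]bool{}
	var ends []Endorsement
	for i := 0; i < 4; i++ {
		pub, priv, _ := ed25519.GenerateKey(rand.Reader)
		committee[string(pub)] = true
		ends = append(ends, Endorsement{pub, ed25519.Sign(priv, digest[:])})
	}

	// A non-DDB validator accepts with 3-of-4 endorsements (2f+1, f=1).
	fmt.Println("accepted:", verifyOutcome(result, ends, committee, 3))
}
```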
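
WAL-based replication can be pictured as replaying low-level physical records instead of re-parsing SQL. The record layout below is invented for illustration (PostgreSQL’s real WAL format is far richer), but it shows why every replica that applies the same stream ends up bit-for-bit identical:

```go
package main

import (
	"bytes"
	"fmt"
)

// WALRecord is a simplified, hypothetical physical-change record:
// overwrite len(Data) bytes of page Page starting at Offset.
type WALRecord struct {
	Page   int
	Offset int
	Data   []byte
}

// apply replays records against a page store. Because the records
// are physical byte changes, no query parsing or planning is needed.
func apply(pages map[int][]byte, recs []WALRecord) {
	for _, r := range recs {
		copy(pages[r.Page][r.Offset:], r.Data)
	}
}

func main() {
	newPages := func() map[int][]byte {
		return map[int][]byte{0: make([]byte, 16), 1: make([]byte, 16)}
	}
	// The endorsed WAL stream shipped with the finalized block.
	stream := []WALRecord{
		{Page: 0, Offset: 4, Data: []byte("alice")},
		{Page: 1, Offset: 0, Data: []byte("balance=90")},
	}

	replicaA, replicaB := newPages(), newPages()
	apply(replicaA, stream)
	apply(replicaB, stream)

	fmt.Println("identical:",
		bytes.Equal(replicaA[0], replicaB[0]) && bytes.Equal(replicaA[1], replicaB[1]))
}
```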
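
Batching might be implemented along these lines: an accumulator flushes once it reaches a size threshold, so many small updates share a single endorsement round. Names and thresholds are illustrative:

```go
package main

import "fmt"

// Batcher groups small updates into one endorsement package,
// amortizing the fixed cost of a consensus round (hypothetical).
// A production version would also flush on a timer so that a
// quiet period never strands pending updates.
type Batcher struct {
	pending [][]byte
	maxSize int
	endorse func(batch [][]byte) // one consensus round per call
}

func (b *Batcher) Add(update []byte) {
	b.pending = append(b.pending, update)
	if len(b.pending) >= b.maxSize {
		b.Flush()
	}
}

func (b *Batcher) Flush() {
	if len(b.pending) == 0 {
		return
	}
	b.endorse(b.pending)
	b.pending = nil
}

func main() {
	rounds := 0
	b := &Batcher{
		maxSize: 100,
		endorse: func(batch [][]byte) {
			rounds++
			fmt.Printf("round %d endorses %d updates\n", rounds, len(batch))
		},
	}

	// 250 small updates cost 3 endorsement rounds instead of 250.
	for i := 0; i < 250; i++ {
		b.Add([]byte(fmt.Sprintf("update-%d", i)))
	}
	b.Flush() // flush the tail (the timer would normally do this)
}
```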
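
Gas metering for database operations could follow a sketch like this, where cost scales with rows touched and execution time, and an over-budget transaction aborts rather than stalling the committee. The rates and names are assumptions, not NCOG’s actual fee schedule:

```go
package main

import (
	"errors"
	"fmt"
)

// Hypothetical per-resource gas rates, in $NEC gas units.
const (
	gasPerRowRead    = 2
	gasPerRowWritten = 10
	gasPerMilliExec  = 1
)

var errOutOfGas = errors.New("gas limit exceeded: transaction aborted")

// chargeGas returns the fee for a DB transaction, or aborts it if
// the limit would be exceeded, preventing a runaway query from
// locking the DDB committee.
func chargeGas(rowsRead, rowsWritten, execMillis, gasLimit uint64) (uint64, error) {
	fee := rowsRead*gasPerRowRead +
		rowsWritten*gasPerRowWritten +
		execMillis*gasPerMilliExec
	if fee > gasLimit {
		return 0, errOutOfGas
	}
	return fee, nil
}

func main() {
	// A modest UPDATE: 500 rows read, 50 written, 20 ms.
	fee, err := chargeGas(500, 50, 20, 10_000)
	fmt.Println(fee, err) // 1520 <nil>

	// A table scan that blows the budget is aborted, not applied.
	_, err = chargeGas(2_000_000, 0, 900, 10_000)
	fmt.Println(err) // gas limit exceeded: transaction aborted
}
```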
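
Finally, read-set validation can be sketched with per-key versions, in the spirit of Fabric’s MVCC check: each transaction records the version of every key it read, and at commit it is invalidated if any of those versions has since changed. The structures are illustrative:

```go
package main

import "fmt"

// versionedStore maps key -> (value, version). Every committed
// write bumps the version (hypothetical, Fabric-style MVCC).
type versionedStore struct {
	vals map[string]string
	vers map[string]uint64
}

// DBTx carries the versions it read and the writes it proposes.
type DBTx struct {
	ID       string
	ReadSet  map[string]uint64 // key -> version observed at execution
	WriteSet map[string]string // key -> new value
}

// commit validates the read-set against current versions; if any
// read is stale the tx is marked failed and nothing is applied.
func (s *versionedStore) commit(tx DBTx) bool {
	for k, v := range tx.ReadSet {
		if s.vers[k] != v {
			return false // a prior finalized tx altered this key
		}
	}
	for k, v := range tx.WriteSet {
		s.vals[k] = v
		s.vers[k]++
	}
	return true
}

func main() {
	s := &versionedStore{
		vals: map[string]string{"acct:42": "100"},
		vers: map[string]uint64{"acct:42": 7},
	}

	// Two concurrent updates both read acct:42 at version 7.
	tx1 := DBTx{"tx1", map[string]uint64{"acct:42": 7}, map[string]string{"acct:42": "90"}}
	tx2 := DBTx{"tx2", map[string]uint64{"acct:42": 7}, map[string]string{"acct:42": "150"}}

	fmt.Println("tx1 applied:", s.commit(tx1)) // true  -- finalized first
	fmt.Println("tx2 applied:", s.commit(tx2)) // false -- stale read-set; retry
}
```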

Overall, this dual consensus design allows NCOG Earth Chain to support rich data-centric applications without sacrificing throughput. It provides the performance of a specialized database when handling data, yet every change still goes through the rigor of blockchain consensus. Many optimizations (like WAL, batching, parallel execution in the DAG) work in concert to keep the integration efficient. 
