Silica Protocol
The Silica Protocol defines how message data is propagated, validated, and made available across the chain. It decouples data availability from transaction execution, allowing the network to process massive amounts of message data without slowing down block propagation.
The Challenge
Traditional blockchains place all data on the consensus-critical path. Every byte included in a block must be propagated quickly enough for validators to verify and vote, tightly coupling data throughput to block propagation latency. As block sizes increase, this coupling becomes the primary limit on scalability.
Obsidian's Solution
Most DA solutions use Data Availability Sampling (DAS)—light clients randomly sample chunks to probabilistically verify availability without downloading everything.
Obsidian takes a different approach: the acceptance criterion for messages is committee attestation, and non-committee nodes either trust that attestation or optionally sample for additional confidence.
| | DAS-based designs | Obsidian |
| --- | --- | --- |
| Block acceptance | Requires successful sampling | Requires committee QC (2/3 threshold) |
| Primary verification | Light clients sample randomly | Committee members prove chunk possession (PoP) |
| Parallelism | Single blob space per block | Multiple lanes with dedicated committees |
| Secondary confidence | N/A | Non-committee nodes can sample (optional) |
The Key Difference
In Obsidian, a block is valid once the lane committee reaches quorum on the Availability Certificate. Non-committee nodes don't need to sample to accept the block—they trust the committee attestation.
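To make the acceptance rule concrete, here is a minimal sketch of the quorum check a non-committee node could apply. The function name and byte-packed bitmap layout are illustrative, and verifying the aggregated signature against the committee's keys is elided:

```go
package main

import (
	"fmt"
	"math/bits"
)

// hasQuorum reports whether an Availability Certificate's signer bitmap
// meets the 2/3 committee threshold. Real acceptance would also verify
// the aggregated signature, which is elided here.
func hasQuorum(signerBitmap []byte, committeeSize int) bool {
	signers := 0
	for _, b := range signerBitmap {
		signers += bits.OnesCount8(b)
	}
	// Integer form of signers/committeeSize >= 2/3.
	return 3*signers >= 2*committeeSize
}

func main() {
	// 43 of 64 members signed: 43/64 >= 2/3, so quorum is reached.
	bitmap := make([]byte, 8)
	for i := 0; i < 43; i++ {
		bitmap[i/8] |= 1 << (i % 8)
	}
	fmt.Println(hasQuorum(bitmap, 64)) // true
}
```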
Life of a Message (Data Plane)
1. Routing to a Lane
All messages are deterministically routed to lanes based on sender address.
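A minimal sketch of such a routing rule. The text requires only determinism over the sender address; the specific hash function and lane-count parameter below are assumptions for illustration:

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// laneFor maps a sender address to a lane index. Illustrative sketch:
// the protocol only requires determinism; the concrete hash and the
// source of the lane count are assumptions here.
func laneFor(sender []byte, numLanes uint64) uint64 {
	digest := sha256.Sum256(sender)
	return binary.BigEndian.Uint64(digest[:8]) % numLanes
}

func main() {
	sender := []byte{0xab, 0xcd, 0xef} // stand-in address bytes
	fmt.Println(laneFor(sender, 16))   // same sender always maps to the same lane
}
```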
This ensures all messages from a single sender go to the same lane. The primary benefit is simplicity: nodes can immediately determine which lane committee should receive a message without knowing the current slot or RANDAO state.
2. Batch Construction
Each lane has a designated Lane Leader for the slot. The leader:
Collects messages from the lane buffer
Constructs a batch
Computes the Micro Root (Merkle root of message identifiers—proves ordering)
Generates the Data Commitment (Merkle root of erasure-coded chunks—proves content); both roots are sketched below
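Both roots are ordinary Merkle commitments. A sketch assuming plain SHA-256 binary trees; the spec's exact tree construction (domain separation, padding, identifier encoding) is not given here:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// merkleRoot computes a binary SHA-256 Merkle root, duplicating the
// last node on odd-sized levels. Illustrative tree rules only.
func merkleRoot(leaves [][32]byte) [32]byte {
	if len(leaves) == 0 {
		return [32]byte{}
	}
	level := append([][32]byte(nil), leaves...) // copy; don't mutate input
	for len(level) > 1 {
		if len(level)%2 == 1 {
			level = append(level, level[len(level)-1])
		}
		next := make([][32]byte, 0, len(level)/2)
		for i := 0; i < len(level); i += 2 {
			next = append(next, sha256.Sum256(append(level[i][:], level[i+1][:]...)))
		}
		level = next
	}
	return level[0]
}

// The Micro Root commits to message identifiers in batch order; the
// Data Commitment commits to hashes of the erasure-coded chunks.
func dataCommitment(chunks [][]byte) [32]byte {
	leaves := make([][32]byte, len(chunks))
	for i, c := range chunks {
		leaves[i] = sha256.Sum256(c)
	}
	return merkleRoot(leaves)
}

func main() {
	ids := [][32]byte{sha256.Sum256([]byte("msg-1")), sha256.Sum256([]byte("msg-2"))}
	fmt.Printf("micro root: %x\n", merkleRoot(ids))
	fmt.Printf("data commitment: %x\n", dataCommitment([][]byte{[]byte("chunk-0"), []byte("chunk-1")}))
}
```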
3. Erasure Coding (The "Sidecar")
The batch is encoded using Reed-Solomon erasure coding, producing redundant chunks (default: K=32 data chunks, N=64 total chunks); see the sketch after this list.
Any 32 chunks are enough to reconstruct the full data
Chunks are distributed to the Lane Committee
This allows the network to tolerate missing chunks without losing data
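A runnable sketch of the coding step, using the klauspost/reedsolomon Go library as a stand-in; the protocol does not mandate a particular implementation, and the batch payload here is a placeholder:

```go
package main

import (
	"fmt"

	"github.com/klauspost/reedsolomon"
)

func main() {
	const k, n = 32, 64 // K data chunks, N total chunks (defaults)

	enc, err := reedsolomon.New(k, n-k)
	if err != nil {
		panic(err)
	}

	batch := make([]byte, 1<<20) // placeholder 1 MiB batch payload
	shards, err := enc.Split(batch)
	if err != nil {
		panic(err)
	}
	if err := enc.Encode(shards); err != nil { // fill the parity chunks
		panic(err)
	}

	// Lose any n-k chunks; the batch is still recoverable from the rest.
	for i := 0; i < n-k; i++ {
		shards[i] = nil
	}
	if err := enc.Reconstruct(shards); err != nil {
		panic(err)
	}
	ok, err := enc.Verify(shards)
	fmt.Println("recovered:", ok, err)
}
```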
4. Proof-of-Possession (PoP)
Committee members receive their chunks and verify them against the Data Commitment; each validator proves possession of its assigned chunks before voting. If the chunks verify, the member signs an availability vote.
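A sketch of this check-then-vote flow, reusing the binary Merkle tree from earlier; ed25519 stands in for the production signature scheme, and the vote encoding is invented for illustration:

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

// verifyMerkleProof recomputes the root from a leaf and its sibling
// path. Illustrative helper matching the tree sketched earlier.
func verifyMerkleProof(leaf [32]byte, path [][32]byte, index int, root [32]byte) bool {
	h := leaf
	for _, sib := range path {
		if index%2 == 0 {
			h = sha256.Sum256(append(h[:], sib[:]...))
		} else {
			h = sha256.Sum256(append(sib[:], h[:]...))
		}
		index /= 2
	}
	return h == root
}

// attestChunk sketches a committee member's possession check before
// voting. The real PoP scheme and vote format are not specified here.
func attestChunk(chunk []byte, path [][32]byte, index int, commitment [32]byte, key ed25519.PrivateKey) ([]byte, bool) {
	leaf := sha256.Sum256(chunk)
	if !verifyMerkleProof(leaf, path, index, commitment) {
		return nil, false // chunk does not match the Data Commitment
	}
	vote := append([]byte("availability-vote:"), commitment[:]...)
	return ed25519.Sign(key, vote), true
}

func main() {
	_, priv, _ := ed25519.GenerateKey(rand.Reader)
	chunk := []byte("chunk-0")
	root := sha256.Sum256(chunk) // single-leaf tree: root == leaf hash
	sig, ok := attestChunk(chunk, nil, 0, root, priv)
	fmt.Println(ok, len(sig)) // true 64
}
```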
5. Availability Certificate
Once a supermajority (2/3) of the committee has voted, the votes are aggregated into an Availability Certificate (AC).
An Availability Certificate contains (see the struct sketch after this list):
Reference to the lane batch (slot, lane, sequence)
Data commitment (Merkle root of chunks)
Aggregated committee signatures
Signer bitmap (which validators attested)
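As a struct, the certificate might look like the following; field names, types, and the aggregation scheme are illustrative assumptions, not the wire format:

```go
package protocol

// AvailabilityCertificate mirrors the field list above. Names, types,
// and the signature aggregation scheme (e.g. BLS) are illustrative.
type AvailabilityCertificate struct {
	// Reference to the lane batch.
	Slot     uint64
	Lane     uint32
	Sequence uint64

	// Merkle root of the erasure-coded chunks.
	DataCommitment [32]byte

	// Aggregated committee signatures over the commitment.
	AggSignature []byte

	// Bit i is set iff committee member i attested.
	SignerBitmap []byte
}
```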
Inclusion in the Chain (Control Plane)
The Block Proposer collects lane headers and Availability Certificates from all lanes and includes them in the canonical block body.
At Slot N, the Block Proposer:
Collects aggregated ACs from the Lane Leaders
Includes the LaneBatchHeader (containing the data commitment) in the block body
Includes the AC to prove the data is available
Crucially, the Block Proposer does not need to download the full message data; the header and the certificate suffice. This ensures blocks can be built and propagated quickly even when sidecar data is large.
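A sketch of this control-plane assembly; types are illustrative (AvailabilityCertificate repeats the earlier sketch), and the point is that the body's size scales with the number of lanes, not with sidecar bytes:

```go
package protocol

import "fmt"

// AvailabilityCertificate as sketched in the previous section.
type AvailabilityCertificate struct {
	Slot, Sequence uint64
	Lane           uint32
	DataCommitment [32]byte
	AggSignature   []byte
	SignerBitmap   []byte
}

// LaneBatchHeader is the compact per-lane record; fields illustrative.
type LaneBatchHeader struct {
	Slot, Sequence uint64
	Lane           uint32
	MicroRoot      [32]byte
	DataCommitment [32]byte
}

// BlockBody is the control-plane payload: O(number of lanes) in size,
// independent of how large the sidecar data is.
type BlockBody struct {
	LaneHeaders  []LaneBatchHeader
	Certificates []AvailabilityCertificate
}

// buildBlockBody pairs each header with its certificate, rejecting
// commitment mismatches. Note that no chunk data is ever read.
func buildBlockBody(headers []LaneBatchHeader, acs []AvailabilityCertificate) (BlockBody, error) {
	if len(headers) != len(acs) {
		return BlockBody{}, fmt.Errorf("header/certificate count mismatch")
	}
	for i := range headers {
		if headers[i].DataCommitment != acs[i].DataCommitment {
			return BlockBody{}, fmt.Errorf("lane %d: commitment mismatch", headers[i].Lane)
		}
	}
	return BlockBody{LaneHeaders: headers, Certificates: acs}, nil
}
```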
Data Availability Guarantee
By including an AC, the protocol guarantees that the data was available at inclusion time. A valid certificate proves that a supermajority of the lane committee possessed the data at the time of signing. Combined with erasure coding, this guarantees reconstructability.
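A back-of-the-envelope check under the default parameters, assuming a 64-member committee holding one distinct chunk each (committee sizing and chunk assignment may differ in practice):

$$\underbrace{\left\lceil \tfrac{2}{3} \cdot 64 \right\rceil}_{\text{chunks held by signers}} = 43 \;\ge\; \underbrace{K = 32}_{\text{chunks needed to reconstruct}}$$

so a valid certificate implies the signers collectively held at least 11 more chunks than reconstruction requires.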
Retention Window
Committee members are obligated to serve data for a defined retention window after inclusion (a sketch of the rule follows this list):
Serve Window: chunks must remain available on request for the full retention window (default: 32 slots)
Archival: After the window expires, data transitions to Archive Nodes for permanent storage
Pruning: Light nodes and non-archival validators can prune sidecars after the window, keeping their storage footprint low
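A sketch of the resulting serve/prune rule; the 32-slot default comes from the text above, and the function shapes are illustrative:

```go
package protocol

// Default retention window from the text above; illustrative constant.
const retentionWindowSlots = 32

// mustServe reports whether a committee node is still obligated to
// serve chunks for a sidecar included at inclusionSlot.
func mustServe(currentSlot, inclusionSlot uint64) bool {
	return currentSlot <= inclusionSlot+retentionWindowSlots
}

// canPrune is the complement for light nodes and non-archival
// validators; Archive Nodes keep the data permanently.
func canPrune(currentSlot, inclusionSlot uint64) bool {
	return !mustServe(currentSlot, inclusionSlot)
}
```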