Sharded Archive Nodes
The Scaling Challenge
Obsidian is designed for massive message volume:
Projected daily messages: 9+ million
Average message size: ~1 KB
Daily data growth: ~9 GB
Yearly data growth: ~3.3 TB
5-year projection: ~16 TB
Traditional archive nodes store everything. That works for Bitcoin (~500 GB total) but not for a high-throughput message chain.
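As a quick sanity check on those projections, a few lines of Python (decimal units throughout, so totals are approximate):

```python
# Back-of-the-envelope check of the growth figures above (decimal units).
MSGS_PER_DAY = 9_000_000
AVG_MSG_BYTES = 1_000                            # ~1 KB per message

daily_gb = MSGS_PER_DAY * AVG_MSG_BYTES / 1e9    # 9.0 GB/day
yearly_tb = daily_gb * 365 / 1e3                 # ~3.3 TB/year
five_year_tb = yearly_tb * 5                     # ~16 TB

print(f"{daily_gb:.1f} GB/day, {yearly_tb:.1f} TB/year, ~{five_year_tb:.0f} TB over 5 years")
```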
The Sharded Solution
Instead of every archive storing all data, Obsidian divides history into epoch ranges and assigns them to shard groups:
┌─────────────────────────────────────────────────────────────┐
│ Complete Chain History │
├─────────────┬─────────────┬─────────────┬─────────────┬─────┤
│ Epochs │ Epochs │ Epochs │ Epochs │ │
│ 0-1000 │ 1001-2000 │ 2001-3000 │ 3001-4000 │ ... │
├─────────────┼─────────────┼─────────────┼─────────────┼─────┤
│ Shard │ Shard │ Shard │ Shard │ │
│ Group 1 │ Group 2 │ Group 3 │ Group 4 │ ... │
└─────────────┴─────────────┴─────────────┴─────────────┴─────┘
│ │ │ │
▼ ▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐ ┌────────┐
│Node A │ │Node D │ │Node G │ │Node J │
│Node B │ │Node E │ │Node H │ │Node K │
│Node C │ │Node F │ │Node I │ │Node L │
└────────┘ └────────┘ └────────┘ └────────┘
Each shard group has multiple nodes for redundancy.
How It Works
Epoch Ranges
An epoch is a consensus time period (~6.4 minutes, 32 slots). Each shard group is responsible for a contiguous range of epochs; a minimal epoch-to-group mapping is sketched after the table:
Shard Group    Epoch Range               Time Period
Group 1        0 - 1,000                 Genesis → Day 4.4
Group 2        1,001 - 2,000             Day 4.4 → Day 8.9
Group 3        2,001 - 3,000             Day 8.9 → Day 13.3
...            ...                       ...
Group N        (N-1)×1000+1 - N×1000     Rolling window
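As referenced above, a minimal Python mapping between epochs and shard groups, assuming the 1,000-epoch ranges in the table (the real assignment logic lives in the node software):

```python
EPOCHS_PER_SHARD = 1_000

def shard_group(epoch: int) -> int:
    """Map an epoch number to the shard group that stores it."""
    # Per the table, group 1 covers epochs 0-1,000 and group N covers
    # (N-1)*1000+1 through N*1000.
    return max(1, (epoch - 1) // EPOCHS_PER_SHARD + 1)

def epoch_range(group: int) -> tuple[int, int]:
    """Inverse mapping: the inclusive epoch range a shard group serves."""
    start = 0 if group == 1 else (group - 1) * EPOCHS_PER_SHARD + 1
    return start, group * EPOCHS_PER_SHARD

assert shard_group(1_000) == 1 and shard_group(1_001) == 2
assert epoch_range(4) == (3_001, 4_000)
```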
Node Assignment
When you register as a sharded archive:
1. Choose your shard group (or get assigned)
2. Download that epoch range from existing archives
3. Serve queries for your assigned range
4. Earn rewards proportional to your coverage
Query Routing
When someone queries historical data, the network consults a registry of which nodes serve which epoch ranges and routes the request to a node in the matching shard group.
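A sketch of what routing against that registry could look like; the registry contents and endpoints below are hypothetical placeholders:

```python
import random

EPOCHS_PER_SHARD = 1_000

# Hypothetical registry: shard group -> archive node endpoints. The real
# network maintains this mapping itself (e.g. on-chain or via gossip).
REGISTRY: dict[int, list[str]] = {
    1: ["node-a.example:9090", "node-b.example:9090", "node-c.example:9090"],
    2: ["node-d.example:9090", "node-e.example:9090", "node-f.example:9090"],
}

def route_query(epoch: int) -> str:
    """Pick an archive node that serves the shard group covering `epoch`."""
    group = max(1, (epoch - 1) // EPOCHS_PER_SHARD + 1)  # epochs 0-1,000 -> group 1
    nodes = REGISTRY.get(group)
    if not nodes:
        raise LookupError(f"no archive nodes registered for shard group {group}")
    return random.choice(nodes)  # naive load spreading across the group

print(route_query(1_500))  # some node in group 2
```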
Benefits
For Node Operators
Lower hardware requirements: store 1/N of total history
Predictable scope: a fixed epoch range with a known size
Easier entry: join without downloading terabytes of data
Flexible commitment: run multiple shards as capacity grows
For the Network
More operators: a lower barrier to entry means more participation
Better distribution: geographic and operator diversity
Redundancy per shard: multiple nodes per range
Scalable: add shards as history grows
Storage Requirements
Full Archive: the entire chain history (~3.3 TB after the first year, ~16 TB after five years)
Sharded Archive (1,000 epochs): roughly 40 GB, fixed once the range fills
A sharded archive operator can run on a modest VPS instead of dedicated hardware.
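The per-shard figure follows from the section's own numbers:

```python
# Approximate size of one 1,000-epoch shard at current projections.
EPOCH_MINUTES = 6.4
DAILY_GB = 9

days_per_shard = 1_000 * EPOCH_MINUTES / (60 * 24)   # ~4.4 days of history
shard_gb = days_per_shard * DAILY_GB                 # ~40 GB

print(f"~{shard_gb:.0f} GB per shard, vs. multiple TB for a full archive")
```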
Reward Distribution
Rewards flow based on coverage: full archives earn more per node, but sharded archives earn proportionally, at much lower cost.
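One way to picture "proportional to coverage". The formula below is purely illustrative; the actual reward schedule is defined by the protocol:

```python
def operator_reward(pool: float, epochs_covered: int,
                    total_epochs: int, nodes_in_group: int) -> float:
    """Illustrative only: pay each group for its coverage, split among its nodes."""
    group_share = pool * epochs_covered / total_epochs
    return group_share / nodes_in_group

# Toy numbers: a full archive covers all 10,000 epochs; a sharded node covers 1,000.
full = operator_reward(1_000.0, 10_000, 10_000, nodes_in_group=5)   # 200.0
shard = operator_reward(1_000.0, 1_000, 10_000, nodes_in_group=3)   # ~33.3
```

In this toy example the sharded operator earns less in absolute terms but more per epoch stored, because fewer nodes split that group's share.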
Running a Sharded Archive
Step 1: Choose Your Shard
Newer shards typically have fewer nodes, and therefore higher rewards per operator.
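For example, a trivial way to pick the thinnest group, given a snapshot of registered node counts (the snapshot source here is hypothetical):

```python
def best_shard(node_counts: dict[int, int]) -> int:
    """Pick the shard group with the fewest registered nodes.

    Fewer nodes in a group means a larger slice of that group's rewards.
    """
    return min(node_counts, key=node_counts.get)

# {shard group: registered node count} -- the newest group is often thinnest.
print(best_shard({1: 12, 2: 9, 3: 7, 4: 2}))  # -> 4
```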
Step 2: Sync Your Range
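A minimal sketch of the sync step, assuming a hypothetical `client` object with `fetch_epoch` and `store_epoch` methods; the actual node software provides its own sync mechanism:

```python
def sync_range(client, start_epoch: int, end_epoch: int) -> None:
    """Download an assigned epoch range from existing archive nodes."""
    for epoch in range(start_epoch, end_epoch + 1):
        data = client.fetch_epoch(epoch)   # hypothetical: pull from an existing archive
        client.store_epoch(epoch, data)    # hypothetical: persist locally

# e.g. sync_range(client, 3_001, 4_000) for shard group 4
```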
Step 3: Register On-Chain
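Registration presumably submits a transaction binding your node to its range. All field names below are hypothetical, for illustration only:

```python
# Hypothetical registration payload -- field names are illustrative only.
registration = {
    "node_id": "<your node's public key>",
    "shard_group": 4,
    "epoch_range": (3_001, 4_000),
    "endpoint": "archive4.example.org:9090",
}
# client.submit_registration(registration)  # assumed on-chain call
```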
Step 4: Serve Queries
Your node automatically responds to queries for its epoch range.
Step 5: Claim Rewards
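A sketch of what claiming might look like, again against a hypothetical client API:

```python
def claim_rewards(client, node_id: str) -> float:
    """Claim whatever rewards have accrued for this node's coverage."""
    accrued = client.pending_rewards(node_id)  # hypothetical query
    if accrued > 0:
        client.submit_claim(node_id)           # hypothetical transaction
    return accrued
```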
Shard Lifecycle
As the chain grows, new shards are created. Early shards become "historical": frozen data with stable requirements. The latest shard is "active": still growing until its range fills.
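The active/historical distinction falls directly out of the epoch math; a one-function sketch assuming the 1,000-epoch ranges from earlier:

```python
EPOCHS_PER_SHARD = 1_000

def shard_status(group: int, current_epoch: int) -> str:
    """A shard is 'active' until its epoch range fills, then 'historical'."""
    range_end = group * EPOCHS_PER_SHARD
    return "historical" if current_epoch > range_end else "active"

print(shard_status(group=3, current_epoch=4_200))  # -> 'historical'
```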
Future: Shard Migration
As older shards become less queried, operators can:
Stay: Continue serving historical data (lower query load, steady rewards)
Migrate: Move to newer, more active shards
Expand: Add additional shard coverage
The market determines where operators focus based on query demand and reward rates.
Next: Running an Archive Node — Technical setup guide