System Architecture
Tenzro Network is a modular protocol designed for the AI age, providing intelligence (AI model inference) and security (TEE enclaves, ZK proofs) with on-chain settlement via Tenzro Ledger.
3-Tier Network Architecture
Tenzro Network supports three tiers of participation. Every tier is a full network participant with a TDIP identity and MPC wallet — the tiers differ in how much infrastructure they run locally.
| Feature | MicroNode | Light Client | Validator |
|---|---|---|---|
| Installation | Zero — single RPC call | Light node binary | Full node binary |
| TDIP Identity | Auto-provisioned | Self-managed | Self-managed |
| MPC Wallet | Auto-provisioned | Self-managed | Self-managed |
| Block Sync | Via remote RPC | Header sync only | Full block sync |
| Consensus | No | No | Yes (BFT consensus) |
| Block Production | No | No | Yes |
| TNZO Staking Required | No | No | Yes |
| Network Capabilities | 10 (inference, payments, agents, MCP, tasks, chain queries, contracts, TEE, bridge, governance) | RPC queries + transactions | All capabilities + block rewards |
| Ideal For | AI agents, apps, quick onboarding | Wallets, dApps | Infrastructure providers |
Tier 1: MicroNode
A MicroNode is a zero-install, full-participant network identity provisioned via a single JSON-RPC call. No binary to install, no P2P daemon to run. Call tenzro_joinAsMicroNode and the network auto-provisions a TDIP DID (did:tenzro:human:{uuid}), an MPC 2-of-3 threshold wallet, and 10 network capabilities.
MicroNodes are ideal for AI agents, browser applications, and any participant that needs network access without running local infrastructure. They connect to the network exclusively via JSON-RPC and A2A/MCP protocols.
JSON-RPC — Join as MicroNode
// Single call — returns DID, wallet, 10 capabilities, endpoints, chain ID
{
  "jsonrpc": "2.0",
  "method": "tenzro_joinAsMicroNode",
  "params": {
    "display_name": "Alice",
    "origin": "app",
    "participant_type": "human"
  },
  "id": 1
}
// Response
{
  "identity": {
    "did": "did:tenzro:human:3f8a...",
    "display_name": "Alice",
    "identity_type": "human",
    "status": "active"
  },
  "wallet": {
    "wallet_id": "wallet-uuid",
    "address": "0x1234...",
    "wallet_type": "mpc_2of3",
    "balance": "0"
  },
  "capabilities": {
    "inference": true,
    "payments": true,
    "agent_collaboration": true,
    "mcp_tools": true,
    "task_execution": true,
    "chain_query": true,
    "smart_contracts": true,
    "tee_services": true,
    "bridge": true,
    "governance": true
  },
  "network": {
    "rpc": "https://rpc.tenzro.network",
    "mcp": "https://mcp.tenzro.network/mcp",
    "a2a": "https://a2a.tenzro.network"
  },
  "is_micro_node": true,
  "chain_id": 1337
}
Tier 2: Light Client
A Light Client runs a lightweight node binary that syncs block headers (not full blocks) from the network. It has full JSON-RPC access for transaction submission and state queries, but does not participate in consensus or produce blocks. Light clients are suitable for wallets and dApps that need cryptographic verification of chain state without the overhead of a full node.
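The core of header-only sync is verifying that each header commits to its predecessor. A minimal sketch of that check — the header fields and hash function here are illustrative assumptions, not the node's actual header layout:

```python
# Illustrative light-client header check. The real header format and hash
# function are defined by the node implementation; this sketch assumes a
# dict with "height" and "prev_hash" fields and canonical-JSON SHA-256.
import hashlib
import json

def header_hash(header: dict) -> str:
    # Canonical JSON (sorted keys) keeps hashing deterministic in this sketch.
    return hashlib.sha256(json.dumps(header, sort_keys=True).encode()).hexdigest()

def verify_header_chain(headers: list) -> bool:
    """Check that each header commits to the hash of its predecessor."""
    for prev, cur in zip(headers, headers[1:]):
        if cur["prev_hash"] != header_hash(prev):
            return False
        if cur["height"] != prev["height"] + 1:
            return False
    return True
```

A light client applying this check gets cryptographic assurance about chain structure without storing full block bodies.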
Tier 3: Validator
Validators run the full tenzro-node binary and participate in BFT consensus. They produce blocks, vote on proposals, and earn TNZO block rewards. Validators must stake TNZO tokens and maintain high uptime. TEE-attested validators receive 2× weight in leader selection. Equivocating validators are automatically slashed (10% of stake).
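The selection and slashing rules above can be sketched as follows. The 2× TEE weight and 10% slash rate come from the text; the selection mechanics, seed handling, and field names are illustrative assumptions:

```python
# Sketch of TEE-weighted leader selection and equivocation slashing.
# Multiplier (2x) and slash rate (10%) are from the docs; everything
# else (field names, seed-to-leader mapping) is hypothetical.
import hashlib

TEE_WEIGHT_MULTIPLIER = 2   # TEE-attested validators get 2x weight
SLASH_RATE = 0.10           # equivocation penalty: 10% of stake

def leader_weights(validators):
    """Stake-proportional weights, doubled for TEE-attested validators."""
    return {
        v["id"]: v["stake"] * (TEE_WEIGHT_MULTIPLIER if v["tee_attested"] else 1)
        for v in validators
    }

def select_leader(validators, round_seed: bytes):
    """Deterministic weighted pick from a shared per-round seed."""
    weights = leader_weights(validators)
    total = sum(weights.values())
    # Map the seed into [0, total) and walk the cumulative distribution.
    point = int.from_bytes(hashlib.sha256(round_seed).digest(), "big") % total
    for vid in sorted(weights):
        point -= weights[vid]
        if point < 0:
            return vid

def slash(validator):
    """Apply the 10% equivocation penalty; returns the amount slashed."""
    penalty = int(validator["stake"] * SLASH_RATE)
    validator["stake"] -= penalty
    return penalty
```

Because every honest validator derives the same seed for a round, all of them compute the same leader without extra communication.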
Layered Architecture
The system is organized into distinct layers, from user interfaces down to cryptographic primitives.
Layer 1: User Interfaces
- Desktop App: Tauri + React with Tailwind CSS 4, OKLCH color system
- CLI: Command-line tool for node operation, wallet management, governance
- SDKs: Rust SDK and TypeScript SDK for programmatic access
Layer 2: Node Core
- JSON-RPC Server: Ethereum-compatible RPC (default: 127.0.0.1:8545)
- Web Verification API: REST endpoints for ZK/TEE verification (default: 0.0.0.0:8080)
- Orchestration: Coordinates all subsystems, manages node lifecycle
Layer 3: Core Subsystems
- Network: libp2p P2P with gossipsub, Kademlia DHT, validator authentication
- Consensus: BFT consensus with TEE-weighted leader selection, equivocation detection, automated slashing
- Multi-VM: EVM, SVM (Solana), DAML execution with Block-STM parallel execution
- Storage: RocksDB with Merkle Patricia Trie, snapshots
- Model Registry: AI model catalog, pricing, and inference routing with circuit breakers for provider health monitoring
Layer 4: Supporting Infrastructure
- Crypto: Ed25519, Secp256k1, AES-GCM, X25519, MPC, BLS12-381
- TEE: Intel TDX, AMD SEV-SNP, AWS Nitro, NVIDIA GPU attestation
- ZK Proofs: Groth16 SNARKs on BN254, GPU-accelerated proving
- Wallet: MPC threshold wallets (2-of-3), multi-asset support
- Token (TNZO): Token economics, staking, governance, liquid staking (stTNZO)
- Identity: TDIP protocol, W3C DIDs, verifiable credentials
- Payments: MPP, x402, Tempo integration, AP2 agent payment sessions, nanopayment channels for per-token billing
- Settlement: Escrow, micropayment channels, nanopayment batch flush, batch processing
- Agent: A2A protocol, MCP bridge, capability registry
- Bridge: LayerZero, Chainlink CCIP, deBridge, Canton adapters, ERC-7802 cross-chain token interface
Modular Architecture
The system is composed of independently testable modules with strict dependency boundaries. Foundation modules have zero internal dependencies, domain modules build on top, and orchestration modules tie everything together.
Key principles:
- Foundation modules (Types, Crypto, Storage) have no internal dependencies
- Domain modules (Identity, Payments, Agent) build on foundation
- Orchestration modules (Node, CLI) compose all domain modules
- Circular dependencies are strictly forbidden
- Feature flags gate optional subsystems (TEE providers, bridge adapters)
Data Flow
Understanding how data flows through the system helps clarify component interactions.
Transaction Submission Flow
- User submits transaction via JSON-RPC (eth_sendRawTransaction)
- Node validates signature and nonce cryptographically
- Transaction enters mempool in the consensus layer
- Leader selects transactions for next block (TEE-weighted selection)
- Block proposed in BFT consensus PREPARE phase
- Validators vote, block enters COMMIT phase with equivocation detection
- Block finalized in DECIDE phase (equivocating validators automatically slashed)
- Multi-VM execution routes to EVM/SVM/DAML based on transaction type
- State changes written to persistent storage
- Transaction receipt returned to user
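Step 1 of this flow is a standard Ethereum-compatible RPC call. A minimal client-side sketch of the request body — the raw transaction hex is a placeholder, not a valid signed transaction:

```python
# Sketch of the eth_sendRawTransaction request a client POSTs to the
# node's JSON-RPC server (default 127.0.0.1:8545). The response carries
# the transaction hash; the receipt is available once the block is
# finalized in the DECIDE phase.
import json

def build_send_raw_tx(raw_tx_hex: str, request_id: int = 1) -> str:
    """Build the JSON-RPC body for submitting a signed transaction."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": "eth_sendRawTransaction",
        "params": [raw_tx_hex],
        "id": request_id,
    })

# Placeholder payload — real bytes come from a signed transaction.
body = build_send_raw_tx("0x...")
```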
Inference Request Flow
- User discovers models via the model registry
- Inference request routed by strategy (price/latency/reputation)
- Payment challenge created using HTTP 402 protocols (MPP/x402)
- User signs payment credential using their MPC wallet
- Provider verifies credential and performs inference
- Result returned with ZK proof of correctness
- Settlement created with escrow
- User verifies proof via Web Verification API
- Escrow released, provider receives payment in TNZO
- Network fee (0.5%) routed to treasury
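The settlement split in the last two steps can be expressed directly. The 0.5% network fee is from the flow above; exact rounding and denomination handling are assumptions of this sketch:

```python
# Sketch of the escrow-release split: provider payout vs. treasury fee.
# The 0.5% rate is from the docs; rounding behavior is assumed.
from decimal import Decimal

NETWORK_FEE_RATE = Decimal("0.005")   # 0.5% routed to treasury

def settle(amount_tnzo: Decimal) -> dict:
    """Split an escrowed TNZO payment between provider and treasury."""
    fee = amount_tnzo * NETWORK_FEE_RATE
    return {"provider": amount_tnzo - fee, "treasury": fee}
```

Using Decimal rather than floats avoids binary-rounding surprises when amounts are later batched for settlement.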
Identity Registration Flow
- User calls register_identity via RPC
- System generates DID (did:tenzro:human:{uuid})
- Auto-provisions MPC 2-of-3 threshold wallet
- W3C DID Document created with public key, services, verification methods
- Identity stored in the identity registry
- Registration transaction submitted to Tenzro Ledger
- DID anchored on-chain for global resolution
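The DID format in step 2 can be sketched in a few lines; the helper name is hypothetical, and only the `human` participant type appears in this document:

```python
# Sketch of DID minting in the did:tenzro:{type}:{uuid} format from the
# flow above. The function name is hypothetical, not the actual API.
import uuid

def new_did(participant_type: str = "human") -> str:
    """Mint a DID in the did:tenzro:{type}:{uuid} format."""
    return f"did:tenzro:{participant_type}:{uuid.uuid4()}"
```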
Storage Architecture
Persistent state is managed with multiple stores for different data types.
| Column Family | Data Stored | Key Format |
|---|---|---|
| CF_BLOCKS | Block headers and bodies | block_height → Block |
| CF_STATE | Account state, contract storage | state_key → state_value |
| CF_ACCOUNTS | Account balances, nonces | address → Account |
| CF_TRANSACTIONS | Transaction data and receipts | tx_hash → Transaction |
| CF_METADATA | Chain metadata (height, state root) | metadata_key → value |
| CF_SNAPSHOTS | Compressed state snapshots | snapshot_id → compressed_state |
Merkle Patricia Trie: State is organized in a Merkle Patricia Trie for efficient proof generation and verification. The state root is included in every block header.
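To illustrate why a single root can commit to all state, here is a simplified binary Merkle tree — not the Patricia trie the node actually uses, but the same commitment idea: any change to any leaf changes the root in the block header.

```python
# Simplified binary Merkle root over state leaves. Illustrative only:
# the node uses a Merkle Patricia Trie, which additionally keys leaves
# by path for efficient lookups and proofs.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list) -> bytes:
    if not leaves:
        return h(b"")                    # empty-state root (convention)
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:               # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]
```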
Snapshots: Periodic snapshots enable fast sync for new nodes. Retention policy keeps the last 100 snapshots.
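The retention policy can be sketched as follows, assuming snapshot ids increase monotonically (id ordering as a proxy for recency is an assumption of this example):

```python
# Sketch of the keep-last-100 snapshot retention policy described above.
RETAIN = 100

def prune_snapshots(snapshot_ids: list) -> tuple:
    """Return (kept, deleted), keeping the 100 highest (newest) ids."""
    ordered = sorted(snapshot_ids)
    return ordered[-RETAIN:], ordered[:-RETAIN]
```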
Cache: 1 GB in-memory cache for hot state data.
Key Design Principles
Modularity
Each component is a self-contained module with well-defined interfaces. This enables independent development, testing, and potential replacement of subsystems without affecting the entire system.
Interface Abstraction
Core interfaces are defined with clear contracts: VM executors, bridge adapters, TEE providers, and payment protocols all implement standard interfaces. This allows multiple implementations (e.g., different TEE providers) to coexist.
Async-First Architecture
All I/O operations use asynchronous processing. Network, storage, and RPC handlers are fully asynchronous for maximum throughput.
Concurrent State Management
Shared state uses concurrent data structures for efficient parallel access.
Builder Pattern for Configuration
Configuration uses builder pattern methods for flexible setup, allowing you to customize chain ID, gas limits, and other parameters.
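A sketch of what such a builder can look like — method and field names here are hypothetical, not the actual SDK API, though the chain ID and RPC default match values elsewhere in this document:

```python
# Hypothetical configuration builder illustrating the pattern described
# above. Defaults (chain_id 1337, RPC 127.0.0.1:8545) match this doc;
# the gas limit default is an arbitrary example value.
from dataclasses import dataclass

@dataclass
class NodeConfig:
    chain_id: int = 1337
    gas_limit: int = 30_000_000
    rpc_addr: str = "127.0.0.1:8545"

class NodeConfigBuilder:
    def __init__(self):
        self._cfg = NodeConfig()

    def chain_id(self, cid: int) -> "NodeConfigBuilder":
        self._cfg.chain_id = cid
        return self                      # return self to allow chaining

    def gas_limit(self, limit: int) -> "NodeConfigBuilder":
        self._cfg.gas_limit = limit
        return self

    def build(self) -> NodeConfig:
        return self._cfg

cfg = NodeConfigBuilder().chain_id(1337).gas_limit(50_000_000).build()
```

Each setter returns the builder itself, so callers chain only the parameters they want to override and fall back to defaults for the rest.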
Feature Flags for Optional Subsystems
Platform-specific code (TEE providers, bridge adapters) is gated behind feature flags. This allows compilation on platforms that don't support certain hardware (e.g., Intel TDX on ARM).
Performance Characteristics
| Component | Throughput / Latency | Notes |
|---|---|---|
| BFT Consensus | 10,000+ TPS, sub-second finality (~500ms) | O(n) communication, 2-phase commit |
| Block-STM Execution | 2-10x speedup on parallel workloads | Conflict detection, sequential fallback |
| ZK Proof Generation | 100-1000ms per proof (CPU) | GPU acceleration: 10-100x faster |
| TEE Attestation | 50-200ms | Hardware-dependent |
| RocksDB Read | 1-10ms | 1GB cache reduces latency |
| RocksDB Write | 10-50ms | WAL + memtable batching |
| libp2p Gossipsub | 1000+ msgs/sec per topic | Mesh topology, fanout 8 |