System Architecture

Tenzro Network is a modular protocol designed for the AI age, providing intelligence (AI model inference) and security (TEE enclaves, ZK proofs) with on-chain settlement via Tenzro Ledger L1.

Layered Architecture

The system is organized into distinct layers, from user interfaces down to cryptographic primitives.

┌─────────────────────────────────────┐
│           User Interfaces           │
│     Desktop / CLI / SDKs / APIs     │
└──────────────────┬──────────────────┘
                   │ JSON-RPC + HTTP
┌──────────────────▼──────────────────┐
│             Tenzro Node             │
│    RPC Server + Verification API    │
└──────────────────┬──────────────────┘
                   │
   ┌───────┬───────┼───────┬───────┐
   │       │       │       │       │
┌──▼──┐ ┌──▼──┐ ┌──▼──┐ ┌──▼──┐ ┌──▼──┐
│ P2P │ │ BFT │ │Multi│ │Store│ │Model│
│ Net │ │Cons.│ │ VM  │ │     │ │ Reg │
└─────┘ └─────┘ └─────┘ └─────┘ └─────┘
   │       │       │       │       │
┌──▼───────▼───────▼───────▼───────▼──┐
│      Supporting Infrastructure      │
│  Crypto · TEE · ZK · Wallet · TNZO  │
│  Identity · Payments · Settlement   │
│           Agent · Bridge            │
└─────────────────────────────────────┘

Layer 1: User Interfaces

  • Desktop App: Tauri + React with Tailwind CSS 4, OKLCH color system
  • CLI: Command-line tool for node operation, wallet management, governance
  • SDKs: Rust SDK and TypeScript SDK for programmatic access

Layer 2: Node Core

  • JSON-RPC Server: Ethereum-compatible RPC (default: 127.0.0.1:8545)
  • Web Verification API: REST endpoints for ZK/TEE verification (default: 0.0.0.0:8080)
  • Orchestration: Coordinates all subsystems, manages node lifecycle

Layer 3: Core Subsystems

  • Network: libp2p P2P with gossipsub, Kademlia DHT, validator authentication
  • Consensus: HotStuff-2 BFT with TEE-weighted leader selection, equivocation detection, automated slashing
  • Multi-VM: EVM, SVM (Solana), DAML execution with Block-STM parallel execution
  • Storage: RocksDB with Merkle Patricia Trie, snapshots
  • Model Registry: AI model catalog, inference routing, pricing
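
The TEE-weighted leader selection mentioned under Consensus can be sketched as a weighted draw over validators: each validator's chance of proposing scales with its stake multiplied by an attestation weight. The struct fields, the function, and the mixing constant below are illustrative assumptions, not the actual consensus code.

```rust
#[derive(Clone)]
struct Validator {
    id: &'static str,
    stake: u64,
    tee_weight: u64, // e.g. 2 if running in an attested enclave, 1 otherwise
}

/// Deterministically pick a leader for `round` by walking the cumulative
/// weight distribution with a round-derived offset.
fn select_leader(validators: &[Validator], round: u64) -> &Validator {
    let total: u64 = validators.iter().map(|v| v.stake * v.tee_weight).sum();
    // Cheap round-to-offset mix; a real implementation would use a
    // verifiable randomness source instead.
    let mut point = round.wrapping_mul(0x9E37_79B9_7F4A_7C15) % total;
    for v in validators {
        let w = v.stake * v.tee_weight;
        if point < w {
            return v;
        }
        point -= w;
    }
    unreachable!("point is always < total")
}
```

Because attested validators carry a higher weight, they are proportionally more likely to lead, without ever excluding un-attested ones.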

Layer 4: Supporting Infrastructure

  • Crypto: Ed25519, Secp256k1, AES-GCM, X25519, MPC, BLS12-381
  • TEE: Intel TDX, AMD SEV-SNP, AWS Nitro, NVIDIA GPU attestation
  • ZK Proofs: Groth16 SNARKs on BN254, GPU-accelerated proving
  • Wallet: MPC threshold wallets (2-of-3), multi-asset support
  • Token (TNZO): Token economics, staking, governance, liquid staking (stTNZO)
  • Identity: TDIP protocol, W3C DIDs, verifiable credentials
  • Payments: MPP, x402, Tempo integration
  • Settlement: Escrow, micropayment channels, batch processing
  • Agent: A2A protocol, MCP bridge, capability registry
  • Bridge: LayerZero, Chainlink CCIP, deBridge, Canton adapters

Modular Architecture

The system is composed of independently testable modules with strict dependency boundaries. Foundation modules have zero internal dependencies, domain modules build on top, and orchestration modules tie everything together.

Key principles:

  • Foundation modules (Types, Crypto, Storage) have minimal dependencies
  • Domain modules (Identity, Payments, Agent) build on foundation
  • Orchestration modules (Node, CLI) compose all domain modules
  • Circular dependencies are strictly forbidden
  • Feature flags gate optional subsystems (TEE providers, bridge adapters)
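
The layering rule above can be sketched with plain Rust modules, where dependencies only ever point downward. Module and function names here are illustrative, not the real crate layout.

```rust
mod types {
    // Foundation: zero internal dependencies.
    pub struct Address(pub [u8; 20]);
}

mod identity {
    // Domain: builds only on foundation modules.
    use super::types::Address;
    pub fn register(addr: Address) -> String {
        format!("registered:{:02x}", addr.0[0])
    }
}

mod node {
    // Orchestration: composes domain modules; never the other way around.
    use super::{identity, types::Address};
    pub fn boot() -> String {
        identity::register(Address([0xAB; 20]))
    }
}
```

A circular edge (e.g. `types` importing from `node`) would fail to compile here, which is exactly the property the dependency boundaries enforce.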

Data Flow

Understanding how data flows through the system helps clarify component interactions.

Transaction Submission Flow

  1. User submits transaction via JSON-RPC (eth_sendRawTransaction)
  2. Node verifies the transaction signature and checks the account nonce
  3. Transaction enters mempool in the consensus layer
  4. Leader selects transactions for next block (TEE-weighted selection)
  5. Block proposed in HotStuff-2 PREPARE phase
  6. Validators vote, block enters COMMIT phase with equivocation detection
  7. Block finalized in DECIDE phase (equivocating validators automatically slashed)
  8. Multi-VM execution routes to EVM/SVM/DAML based on transaction type
  9. State changes written to persistent storage
  10. Transaction receipt returned to user
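
The quorum logic behind steps 5-7 can be sketched as a small state machine. Phase names follow the flow above; the function itself and its vote counting are an illustrative assumption, not the actual consensus implementation.

```rust
#[derive(Debug, PartialEq)]
enum Phase {
    Prepare,
    Commit,
    Decide,
}

/// Advance a block's phase once a quorum of 2f+1 votes (out of n = 3f+1
/// validators) has been collected; otherwise stay in place.
fn advance(phase: Phase, votes: usize, n: usize) -> Phase {
    let f = (n - 1) / 3; // tolerated Byzantine validators
    let quorum = 2 * f + 1; // votes needed to advance
    match phase {
        Phase::Prepare if votes >= quorum => Phase::Commit,
        Phase::Commit if votes >= quorum => Phase::Decide,
        other => other, // not enough votes yet
    }
}
```

With n = 4 validators, f = 1 and a quorum is 3 votes, so 2 votes leave a block in PREPARE while 3 move it to COMMIT.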

Inference Request Flow

  1. User discovers models via the model registry
  2. Inference request routed by strategy (price/latency/reputation)
  3. Payment challenge created using HTTP 402 protocols (MPP/x402)
  4. User signs payment credential using their MPC wallet
  5. Provider verifies credential and performs inference
  6. Result returned with ZK proof of correctness
  7. Settlement created with escrow
  8. User verifies proof via Web Verification API
  9. Escrow released, provider receives payment in TNZO
  10. Network fee (0.5%) routed to treasury
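
Step 10's fee split can be computed in integer basis points, which avoids floating-point rounding on token amounts. Only the 0.5% rate comes from the flow above; the constant name and helper are illustrative.

```rust
const NETWORK_FEE_BPS: u128 = 50; // 0.5% expressed as basis points

/// Returns (provider_amount, treasury_fee) for a payment in base units.
fn settle(amount: u128) -> (u128, u128) {
    let fee = amount * NETWORK_FEE_BPS / 10_000;
    (amount - fee, fee)
}
```

For a 10,000-unit payment the provider receives 9,950 units and the treasury 50.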

Identity Registration Flow

  1. User calls register_identity via RPC
  2. System generates DID (did:tenzro:human:{uuid})
  3. Auto-provisions MPC 2-of-3 threshold wallet
  4. W3C DID Document created with public key, services, verification methods
  5. Identity stored in the identity registry
  6. Registration transaction submitted to Tenzro Ledger
  7. DID anchored on-chain for global resolution
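
The DID format from step 2 is a simple string layout; a hedged sketch, where the helper is illustrative and the UUID is supplied by the caller rather than generated:

```rust
/// Build a Tenzro DID of the form did:tenzro:{actor}:{uuid},
/// e.g. actor "human" for end users.
fn make_did(actor: &str, uuid: &str) -> String {
    format!("did:tenzro:{actor}:{uuid}")
}
```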

Storage Architecture

Persistent state is managed with multiple stores for different data types.

Column Family     Data Stored                          Key Format
CF_BLOCKS         Block headers and bodies             block_height → Block
CF_STATE          Account state, contract storage      state_key → state_value
CF_ACCOUNTS       Account balances, nonces             address → Account
CF_TRANSACTIONS   Transaction data and receipts        tx_hash → Transaction
CF_METADATA       Chain metadata (height, state root)  metadata_key → value
CF_SNAPSHOTS      Compressed state snapshots           snapshot_id → compressed_state

Merkle Patricia Trie: State is organized in a Merkle Patricia Trie for efficient proof generation and verification. The state root is included in every block header.

Snapshots: Periodic snapshots enable fast sync for new nodes. Retention policy keeps the last 100 snapshots.

Cache: 1 GB in-memory cache for hot state data.
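
Two of the behaviors above can be sketched in a few lines: big-endian block keys, so that RocksDB's lexicographic ordering matches numeric height order, and snapshot pruning that retains only the newest entries. Both helpers are illustrative assumptions, not the actual store code.

```rust
/// Encode a block height so byte-wise (lexicographic) order equals
/// numeric order, enabling efficient range scans by height.
fn block_key(height: u64) -> [u8; 8] {
    height.to_be_bytes()
}

/// Keep only the newest `keep` snapshot IDs (the docs' retention
/// policy uses keep = 100).
fn prune_snapshots(mut ids: Vec<u64>, keep: usize) -> Vec<u64> {
    ids.sort_unstable();
    if ids.len() > keep {
        ids.split_off(ids.len() - keep)
    } else {
        ids
    }
}
```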

Key Design Principles

Modularity

Each component is a self-contained module with well-defined interfaces. This enables independent development, testing, and potential replacement of subsystems without affecting the entire system.

Interface Abstraction

Core interfaces are defined with clear contracts: VM executors, bridge adapters, TEE providers, and payment protocols all implement standard interfaces. This allows multiple implementations (e.g., different TEE providers) to coexist.
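
The idea can be sketched as a trait with interchangeable implementations; the trait, type names, and gas numbers below are placeholders, not the real executor interface.

```rust
trait VmExecutor {
    fn name(&self) -> &'static str;
    /// Execute a payload and return gas used (illustrative).
    fn execute(&self, payload: &[u8]) -> Result<u64, String>;
}

struct Evm;
impl VmExecutor for Evm {
    fn name(&self) -> &'static str { "evm" }
    fn execute(&self, payload: &[u8]) -> Result<u64, String> {
        Ok(21_000 + payload.len() as u64 * 16) // base cost + per-byte cost
    }
}

struct Svm;
impl VmExecutor for Svm {
    fn name(&self) -> &'static str { "svm" }
    fn execute(&self, _payload: &[u8]) -> Result<u64, String> {
        Ok(5_000) // flat placeholder cost
    }
}

/// Callers depend only on the trait, so engines can coexist and be swapped.
fn run(vm: &dyn VmExecutor, payload: &[u8]) -> Result<u64, String> {
    vm.execute(payload)
}
```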

Async-First Architecture

All I/O operations use asynchronous processing. Network, storage, and RPC handlers are fully asynchronous for maximum throughput.

Concurrent State Management

Shared state uses concurrent data structures for efficient parallel access.

Builder Pattern for Configuration

Configuration uses builder pattern methods for flexible setup, allowing you to customize chain ID, gas limits, and other parameters.
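
A hedged sketch of the pattern, where field names and defaults are assumptions (the RPC default mirrors the address documented above): each method consumes and returns the builder, so calls chain fluently.

```rust
#[derive(Debug)]
struct NodeConfig {
    chain_id: u64,
    gas_limit: u64,
    rpc_addr: &'static str,
}

struct NodeConfigBuilder {
    chain_id: u64,
    gas_limit: u64,
    rpc_addr: &'static str,
}

impl NodeConfigBuilder {
    fn new() -> Self {
        // Illustrative defaults; RPC address matches the docs above.
        Self { chain_id: 1, gas_limit: 30_000_000, rpc_addr: "127.0.0.1:8545" }
    }
    fn chain_id(mut self, id: u64) -> Self { self.chain_id = id; self }
    fn gas_limit(mut self, limit: u64) -> Self { self.gas_limit = limit; self }
    fn build(self) -> NodeConfig {
        NodeConfig {
            chain_id: self.chain_id,
            gas_limit: self.gas_limit,
            rpc_addr: self.rpc_addr,
        }
    }
}
```

Usage: `let cfg = NodeConfigBuilder::new().chain_id(9000).build();` leaves every unset field at its default.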

Feature Flags for Optional Subsystems

Platform-specific code (TEE providers, bridge adapters) is gated behind feature flags. This allows compilation on platforms that don't support certain hardware (e.g., Intel TDX on ARM).
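
The gating can be sketched with `#[cfg(feature = ...)]`; the feature name and stub below are illustrative, not the crate's actual feature set.

```rust
#[cfg(feature = "tee-tdx")]
mod tdx {
    // Real provider: only compiles when the `tee-tdx` feature is on.
    pub fn attest() -> &'static str { "tdx-quote" }
}

#[cfg(not(feature = "tee-tdx"))]
mod tdx {
    // Stub used when the feature is off (e.g. building on ARM).
    pub fn attest() -> &'static str { "tee-disabled" }
}

fn attestation() -> &'static str {
    tdx::attest()
}
```

Either way the caller compiles unchanged; only the module body behind the flag differs.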

Performance Characteristics

Component             Throughput / Latency                         Notes
HotStuff-2 Consensus  10,000+ TPS, sub-second finality (~500 ms)   O(n) communication, 2-phase commit
Block-STM Execution   2-10x speedup on parallel workloads          Conflict detection, sequential fallback
ZK Proof Generation   100-1000 ms per proof (CPU)                  GPU acceleration: 10-100x faster
TEE Attestation       50-200 ms                                    Hardware-dependent
RocksDB Read          1-10 ms                                      1 GB cache reduces latency
RocksDB Write         10-50 ms                                     WAL + memtable batching
libp2p Gossipsub      1,000+ msgs/sec per topic                    Mesh topology, fanout 8