System Architecture

Tenzro Network is a modular protocol designed for the AI age, providing intelligence (AI model inference) and security (TEE enclaves, ZK proofs) with on-chain settlement via Tenzro Ledger.

3-Tier Network Architecture

Tenzro Network supports three tiers of participation. Every tier is a full network participant with a TDIP identity and MPC wallet — the tiers differ in how much infrastructure they run locally.

| Feature | MicroNode | Light Client | Validator |
|---|---|---|---|
| Installation | Zero — single RPC call | Light node binary | Full node binary |
| TDIP Identity | Auto-provisioned | Self-managed | Self-managed |
| MPC Wallet | Auto-provisioned | Self-managed | Self-managed |
| Block Sync | Via remote RPC | Header sync only | Full block sync |
| Consensus | No | No | Yes (BFT consensus) |
| Block Production | No | No | Yes |
| TNZO Staking Required | No | No | Yes |
| Network Capabilities | 10 (inference, payments, agents, MCP, tasks, chain queries, contracts, TEE, bridge, governance) | RPC queries + transactions | All capabilities + block rewards |
| Ideal For | AI agents, apps, quick onboarding | Wallets, dApps | Infrastructure providers |

Tier 1: MicroNode

A MicroNode is a zero-install, full-participant network identity provisioned via a single JSON-RPC call. No binary to install, no P2P daemon to run. Call tenzro_joinAsMicroNode and the network auto-provisions a TDIP DID (did:tenzro:human:{uuid}), an MPC 2-of-3 threshold wallet, and 10 network capabilities.

MicroNodes are ideal for AI agents, browser applications, and any participant that needs network access without running local infrastructure. They connect to the network exclusively via JSON-RPC and A2A/MCP protocols.

JSON-RPC — Join as MicroNode

// Single call — returns DID, wallet, 10 capabilities, endpoints, chain ID
{
  "jsonrpc": "2.0",
  "method": "tenzro_joinAsMicroNode",
  "params": {
    "display_name": "Alice",
    "origin": "app",
    "participant_type": "human"
  },
  "id": 1
}

// Response
{
  "identity": {
    "did": "did:tenzro:human:3f8a...",
    "display_name": "Alice",
    "identity_type": "human",
    "status": "active"
  },
  "wallet": {
    "wallet_id": "wallet-uuid",
    "address": "0x1234...",
    "wallet_type": "mpc_2of3",
    "balance": "0"
  },
  "capabilities": {
    "inference": true,
    "payments": true,
    "agent_collaboration": true,
    "mcp_tools": true,
    "task_execution": true,
    "chain_query": true,
    "smart_contracts": true,
    "tee_services": true,
    "bridge": true,
    "governance": true
  },
  "network": {
    "rpc": "https://rpc.tenzro.network",
    "mcp": "https://mcp.tenzro.network/mcp",
    "a2a": "https://a2a.tenzro.network"
  },
  "is_micro_node": true,
  "chain_id": 1337
}
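The call above can be wrapped in a few lines of client code. A minimal TypeScript sketch, assuming a fetch-capable runtime (Node 18+ or a browser); the helper names here are illustrative, not part of any official SDK:

```typescript
// Shape of the params object shown in the JSON-RPC example above.
interface JoinParams {
  display_name: string;
  origin: string;
  participant_type: "human" | "agent";
}

// Build the JSON-RPC 2.0 envelope for tenzro_joinAsMicroNode.
function buildJoinRequest(params: JoinParams, id = 1) {
  return {
    jsonrpc: "2.0" as const,
    method: "tenzro_joinAsMicroNode",
    params,
    id,
  };
}

// Joining is a single HTTP POST to the RPC endpoint.
async function joinAsMicroNode(rpcUrl: string, params: JoinParams) {
  const res = await fetch(rpcUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildJoinRequest(params)),
  });
  return res.json(); // { identity, wallet, capabilities, network, ... }
}
```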

Tier 2: Light Client

A Light Client runs a lightweight node binary that syncs block headers (not full blocks) from the network. It has full JSON-RPC access for transaction submission and state queries, but does not participate in consensus or produce blocks. Light clients are suitable for wallets and dApps that need cryptographic verification of chain state without the overhead of a full node.
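The core of header sync is checking that each header commits to its predecessor. The sketch below illustrates that check only; the `Header` shape and hashing scheme are assumptions for illustration, not the actual Tenzro header format:

```typescript
import { createHash } from "node:crypto";

// Illustrative header shape — the real format will carry more fields.
interface Header {
  height: number;
  prevHash: string;
  stateRoot: string;
}

// Hash a header deterministically (assumed scheme: SHA-256 over fields).
function headerHash(h: Header): string {
  return createHash("sha256")
    .update(`${h.height}|${h.prevHash}|${h.stateRoot}`)
    .digest("hex");
}

// A header chain is valid if every header links to the hash of the
// previous one and heights increase by exactly one.
function verifyHeaderChain(headers: Header[]): boolean {
  for (let i = 1; i < headers.length; i++) {
    if (headers[i].prevHash !== headerHash(headers[i - 1])) return false;
    if (headers[i].height !== headers[i - 1].height + 1) return false;
  }
  return true;
}
```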

Tier 3: Validator

Validators run the full tenzro-node binary and participate in BFT consensus. They produce blocks, vote on proposals, and earn TNZO block rewards. Validators must stake TNZO tokens and maintain high uptime. TEE-attested validators receive 2× weight in leader selection. Equivocating validators are automatically slashed (10% of stake).
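The validator economics stated above (2× leader weight for TEE attestation, 10% slashing for equivocation) can be sketched as pure functions. Names and structure are illustrative, not the node's actual API:

```typescript
interface Validator {
  id: string;
  stake: bigint;
  teeAttested: boolean;
}

// Leader-selection weight: stake, doubled for TEE-attested validators.
function leaderWeight(v: Validator): bigint {
  return v.teeAttested ? v.stake * 2n : v.stake;
}

// Equivocation penalty: slash 10% of the validator's stake.
function slashEquivocation(v: Validator): Validator {
  return { ...v, stake: v.stake - v.stake / 10n };
}
```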

Layered Architecture

The system is organized into distinct layers, from user interfaces down to cryptographic primitives.

Layer 1: User Interfaces

  • Desktop App: Tauri + React with Tailwind CSS 4, OKLCH color system
  • CLI: Command-line tool for node operation, wallet management, governance
  • SDKs: Rust SDK and TypeScript SDK for programmatic access

Layer 2: Node Core

  • JSON-RPC Server: Ethereum-compatible RPC (default: 127.0.0.1:8545)
  • Web Verification API: REST endpoints for ZK/TEE verification (default: 0.0.0.0:8080)
  • Orchestration: Coordinates all subsystems, manages node lifecycle

Layer 3: Core Subsystems

  • Network: libp2p P2P with gossipsub, Kademlia DHT, validator authentication
  • Consensus: BFT consensus with TEE-weighted leader selection, equivocation detection, automated slashing
  • Multi-VM: EVM, SVM (Solana), and DAML runtimes with Block-STM parallel execution
  • Storage: RocksDB with Merkle Patricia Trie, snapshots
  • Model Registry: AI model catalog, pricing, and inference routing with circuit breakers for provider health monitoring

Layer 4: Supporting Infrastructure

  • Crypto: Ed25519, Secp256k1, AES-GCM, X25519, MPC, BLS12-381
  • TEE: Intel TDX, AMD SEV-SNP, AWS Nitro, NVIDIA GPU attestation
  • ZK Proofs: Groth16 SNARKs on BN254, GPU-accelerated proving
  • Wallet: MPC threshold wallets (2-of-3), multi-asset support
  • Token (TNZO): Token economics, staking, governance, liquid staking (stTNZO)
  • Identity: TDIP protocol, W3C DIDs, verifiable credentials
  • Payments: MPP, x402, Tempo integration, AP2 agent payment sessions, nanopayment channels for per-token billing
  • Settlement: Escrow, micropayment channels, nanopayment batch flush, batch processing
  • Agent: A2A protocol, MCP bridge, capability registry
  • Bridge: LayerZero, Chainlink CCIP, deBridge, Canton adapters, ERC-7802 cross-chain token interface

Modular Architecture

The system is composed of independently testable modules with strict dependency boundaries. Foundation modules have zero internal dependencies, domain modules build on top, and orchestration modules tie everything together.

Key principles:

  • Foundation modules (Types, Crypto, Storage) have no internal dependencies
  • Domain modules (Identity, Payments, Agent) build on foundation
  • Orchestration modules (Node, CLI) compose all domain modules
  • Circular dependencies are strictly forbidden
  • Feature flags gate optional subsystems (TEE providers, bridge adapters)

Data Flow

Understanding how data flows through the system helps clarify component interactions.

Transaction Submission Flow

  1. User submits transaction via JSON-RPC (eth_sendRawTransaction)
  2. Node verifies the transaction signature cryptographically and checks the account nonce
  3. Transaction enters mempool in the consensus layer
  4. Leader selects transactions for next block (TEE-weighted selection)
  5. Block proposed in BFT consensus PREPARE phase
  6. Validators vote, block enters COMMIT phase with equivocation detection
  7. Block finalized in DECIDE phase (equivocating validators automatically slashed)
  8. Multi-VM execution routes to EVM/SVM/DAML based on transaction type
  9. State changes written to persistent storage
  10. Transaction receipt returned to user

Inference Request Flow

  1. User discovers models via the model registry
  2. Inference request routed by strategy (price/latency/reputation)
  3. Payment challenge created using HTTP 402 protocols (MPP/x402)
  4. User signs payment credential using their MPC wallet
  5. Provider verifies credential and performs inference
  6. Result returned with ZK proof of correctness
  7. Settlement created with escrow
  8. User verifies proof via Web Verification API
  9. Escrow released, provider receives payment in TNZO
  10. Network fee (0.5%) routed to treasury
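Step 10 routes a 0.5% network fee to the treasury. A sketch of the split, using integer basis points to avoid floating-point rounding (an assumed convention; the actual settlement logic may differ):

```typescript
// 0.5% = 50 basis points out of 10,000.
const NETWORK_FEE_BPS = 50n;

// Split a settled inference payment between provider and treasury.
function settleInference(amount: bigint): { provider: bigint; treasury: bigint } {
  const treasury = (amount * NETWORK_FEE_BPS) / 10_000n;
  return { provider: amount - treasury, treasury };
}
```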

Identity Registration Flow

  1. User calls register_identity via RPC
  2. System generates DID (did:tenzro:human:{uuid})
  3. Auto-provisions MPC 2-of-3 threshold wallet
  4. W3C DID Document created with public key, services, verification methods
  5. Identity stored in the identity registry
  6. Registration transaction submitted to Tenzro Ledger
  7. DID anchored on-chain for global resolution
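Step 2 produces a DID of the form did:tenzro:human:{uuid}. A sketch using the runtime's UUID generator (the node may derive identifiers differently):

```typescript
import { randomUUID } from "node:crypto";

type ParticipantType = "human" | "agent";

// Assemble a DID in the did:tenzro:{type}:{uuid} format shown above.
function generateDid(participantType: ParticipantType): string {
  return `did:tenzro:${participantType}:${randomUUID()}`;
}
```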

Storage Architecture

Persistent state is managed with multiple stores for different data types.

| Column Family | Data Stored | Key Format |
|---|---|---|
| CF_BLOCKS | Block headers and bodies | block_height → Block |
| CF_STATE | Account state, contract storage | state_key → state_value |
| CF_ACCOUNTS | Account balances, nonces | address → Account |
| CF_TRANSACTIONS | Transaction data and receipts | tx_hash → Transaction |
| CF_METADATA | Chain metadata (height, state root) | metadata_key → value |
| CF_SNAPSHOTS | Compressed state snapshots | snapshot_id → compressed_state |

Merkle Patricia Trie: State is organized in a Merkle Patricia Trie for efficient proof generation and verification. The state root is included in every block header.

Snapshots: Periodic snapshots enable fast sync for new nodes. Retention policy keeps the last 100 snapshots.
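The 100-snapshot retention policy reduces to a simple pruning rule. A sketch, treating snapshot IDs as block heights for illustration:

```typescript
const SNAPSHOT_RETENTION = 100;

// Keep only the most recent snapshots (by height), dropping the rest.
function pruneSnapshots(snapshotHeights: number[]): number[] {
  return [...snapshotHeights].sort((a, b) => a - b).slice(-SNAPSHOT_RETENTION);
}
```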

Cache: 1 GB in-memory cache for hot state data.

Key Design Principles

Modularity

Each component is a self-contained module with well-defined interfaces. This enables independent development, testing, and potential replacement of subsystems without affecting the entire system.

Interface Abstraction

Core interfaces are defined with clear contracts: VM executors, bridge adapters, TEE providers, and payment protocols all implement standard interfaces. This allows multiple implementations (e.g., different TEE providers) to coexist.

Async-First Architecture

All I/O operations use asynchronous processing. Network, storage, and RPC handlers are fully asynchronous for maximum throughput.

Concurrent State Management

Shared state uses concurrent data structures for efficient parallel access.

Builder Pattern for Configuration

Configuration uses builder pattern methods for flexible setup, allowing you to customize chain ID, gas limits, and other parameters.
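The builder style looks roughly like this — a hypothetical sketch, not the SDK's actual API (field names and defaults, including the gas limit, are assumptions; the chain ID default matches the testnet value returned by the join call):

```typescript
// Hypothetical builder for node configuration.
class NodeConfigBuilder {
  private chainId = 1337;          // testnet chain ID
  private gasLimit = 30_000_000;   // assumed default

  withChainId(id: number): this { this.chainId = id; return this; }
  withGasLimit(limit: number): this { this.gasLimit = limit; return this; }
  build() { return { chainId: this.chainId, gasLimit: this.gasLimit }; }
}

// Each with* method returns `this`, so calls chain fluently.
const config = new NodeConfigBuilder().withChainId(1337).withGasLimit(10_000_000).build();
```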

Feature Flags for Optional Subsystems

Platform-specific code (TEE providers, bridge adapters) is gated behind feature flags. This allows compilation on platforms that don't support certain hardware (e.g., Intel TDX on ARM).

Performance Characteristics

| Component | Throughput / Latency | Notes |
|---|---|---|
| BFT Consensus | 10,000+ TPS, sub-second finality (~500ms) | O(n) communication, three-phase commit (PREPARE/COMMIT/DECIDE) |
| Block-STM Execution | 2-10× speedup on parallel workloads | Conflict detection, sequential fallback |
| ZK Proof Generation | 100-1000ms per proof (CPU) | GPU acceleration: 10-100× faster |
| TEE Attestation | 50-200ms | Hardware-dependent |
| RocksDB Read | 1-10ms | 1 GB cache reduces latency |
| RocksDB Write | 10-50ms | WAL + memtable batching |
| libp2p Gossipsub | 1000+ msgs/sec per topic | Mesh topology, fanout 8 |