Tenzro Testnet is live. Get testnet TNZO

Tenzro Network

Decentralized AI inference marketplace powered by TEE security and micropayment channels

Access any AI model through a unified protocol. Providers earn TNZO by serving AI inference and TEE security services. Users discover models and pay per token. All settlements happen on-chain with cryptographic verification.

What is Tenzro Network?

Tenzro Network is the protocol layer enabling decentralized AI inference and TEE security services. It sits atop the Tenzro Ledger L1 blockchain, providing the marketplace infrastructure for AI providers and consumers to interact trustlessly.

Unlike centralized AI APIs, Tenzro Network allows anyone to run a node, serve models, and earn TNZO tokens. Users discover available models through the on-chain registry, route requests to optimal providers based on price, latency, and reputation, and pay per token via micropayment channels.

All inference results can be cryptographically verified using ZK proofs or TEE attestations. Settlements happen on the Tenzro Ledger with automatic fee distribution to validators, model providers, and the network treasury.

Key Features

AI Inference Marketplace

Discover and access any AI model through a unified protocol. Providers register models on-chain with pricing, capabilities, and performance metrics. Route requests based on cost, latency, or reputation.
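The routing described above can be sketched as a simple scoring function. This is an illustrative example only: the struct fields, weights, and normalization below are assumptions for the sake of the sketch, not the actual Inference Router implementation.

```rust
// Hypothetical routing sketch: rank providers by a weighted score over
// price, latency, and reputation. Lower score is better.
#[derive(Debug, Clone)]
struct Provider {
    id: &'static str,
    price_per_token: f64, // TNZO per token
    latency_ms: f64,      // recent median latency
    reputation: f64,      // 0.0..=1.0, e.g. derived from on-chain history
}

fn route_score(p: &Provider, w_price: f64, w_latency: f64, w_rep: f64) -> f64 {
    w_price * p.price_per_token * 1e4   // scale price into a comparable range
        + w_latency * p.latency_ms / 100.0
        + w_rep * (1.0 - p.reputation)  // penalize low reputation
}

/// Pick the provider with the lowest combined score (equal weights here).
fn select_provider<'a>(providers: &'a [Provider]) -> Option<&'a Provider> {
    providers.iter().min_by(|a, b| {
        route_score(a, 1.0, 1.0, 1.0)
            .partial_cmp(&route_score(b, 1.0, 1.0, 1.0))
            .unwrap()
    })
}
```

A real router would tune the weights per request ("cheapest" vs. "fastest") rather than fixing them as above.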

TEE Security Services

Run confidential AI inference inside hardware enclaves (Intel TDX, AMD SEV-SNP, AWS Nitro, NVIDIA GPU). Cryptographic attestations prove code integrity and data isolation. Ideal for sensitive workloads.

Micropayment Channels

Pay per token with off-chain micropayment channels. Open a channel with escrowed TNZO, stream payments during inference, and settle on-chain in batches. Enables sub-cent transactions with minimal gas costs.
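The open/stream/settle lifecycle can be illustrated with minimal off-chain bookkeeping: escrow a balance, accumulate per-token payments off-chain, and settle the net amount once on close. The names and micro-TNZO units below are assumptions for this sketch, not the Tenzro channel API.

```rust
// Illustrative micropayment-channel bookkeeping (not the real protocol).
struct Channel {
    escrow: u64, // locked TNZO, in hypothetical micro-TNZO units
    spent: u64,  // cumulative amount streamed off-chain
}

impl Channel {
    /// Open a channel by locking `escrow` on-chain.
    fn open(escrow: u64) -> Self {
        Channel { escrow, spent: 0 }
    }

    /// Stream one off-chain payment; fails once escrow is exhausted.
    fn pay(&mut self, amount: u64) -> Result<(), &'static str> {
        if self.spent + amount > self.escrow {
            return Err("escrow exhausted");
        }
        self.spent += amount;
        Ok(())
    }

    /// Close: a single on-chain settlement pays the provider `spent`
    /// and refunds the remainder to the user.
    fn close(self) -> (u64, u64) {
        (self.spent, self.escrow - self.spent)
    }
}
```

The point of the design is that only `open` and `close` touch the chain; every `pay` in between is a cheap off-chain update, which is what makes sub-cent per-token pricing viable.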

Model Registry

On-chain catalog of all available models with metadata: name, category (text/image/audio/video), modality, pricing, provider addresses, and health metrics. Filter by capabilities, compare costs, and verify provider reputation.
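Filtering the catalog by the metadata above might look like the following sketch. The `ModelEntry` fields mirror the listed metadata but are illustrative, not the actual on-chain schema.

```rust
// Hypothetical registry entry with a subset of the catalog metadata.
#[derive(Debug, Clone)]
struct ModelEntry {
    name: &'static str,
    category: &'static str, // text / image / audio / video
    price_per_token: f64,   // TNZO per token
    healthy: bool,          // from provider health metrics
}

/// Return healthy models in `category` at or under `max_price`.
fn filter_models<'a>(
    catalog: &'a [ModelEntry],
    category: &str,
    max_price: f64,
) -> Vec<&'a ModelEntry> {
    catalog
        .iter()
        .filter(|m| m.category == category && m.price_per_token <= max_price && m.healthy)
        .collect()
}
```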

Provider Rewards

Earn TNZO by serving models or providing TEE enclaves. Network collects a small commission (default 0.5%) on all AI and TEE service payments. Rewards flow to treasury and are distributed to validators and stakers.
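As a worked example of the default 0.5% commission: the split below uses integer micro-TNZO and a basis-point constant to avoid float rounding. The representation is an assumption for illustration, not how the ledger encodes fees.

```rust
// Hypothetical fee split: 0.5% = 50 basis points to the network treasury.
const COMMISSION_BPS: u64 = 50;

/// Split a service payment into (provider_share, treasury_commission),
/// both in micro-TNZO.
fn split_payment(amount_micro_tnzo: u64) -> (u64, u64) {
    let commission = amount_micro_tnzo * COMMISSION_BPS / 10_000;
    (amount_micro_tnzo - commission, commission)
}
```

So a 1 TNZO payment (1,000,000 micro-TNZO in this sketch) yields 995,000 to the provider and 5,000 to the treasury.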

Agent Infrastructure

Self-sovereign AI agents with auto-provisioned MPC wallets. A2A protocol for inter-agent communication. MCP bridge for Anthropic Model Context Protocol. Delegation scopes control agent spending and operations.

Architecture

┌─────────────────────────────────────────────────────────────────────┐
│                        User Interfaces                              │
│              Desktop (Tauri+React) / CLI / SDKs                     │
└────────────────────────────┬────────────────────────────────────────┘
                             │ JSON-RPC + HTTP
┌────────────────────────────▼────────────────────────────────────────┐
│                       Tenzro Network                                │
│                     (Protocol Layer)                                │
├─────────────────────────────────────────────────────────────────────┤
│                                                                     │
│  ┌───────────────┐  ┌──────────────┐  ┌────────────────────────┐  │
│  │ Model Registry│  │ Inference    │  │ Provider Manager       │  │
│  │               │  │ Router       │  │                        │  │
│  │ • Catalog     │  │ • Price      │  │ • Health Monitoring    │  │
│  │ • Filtering   │  │ • Latency    │  │ • Circuit Breakers     │  │
│  │ • Pricing     │  │ • Reputation │  │ • Discovery            │  │
│  └───────────────┘  └──────────────┘  └────────────────────────┘  │
│                                                                     │
│  ┌───────────────┐  ┌──────────────┐  ┌────────────────────────┐  │
│  │ Micropayment  │  │ Settlement   │  │ Agent Infrastructure   │  │
│  │ Channels      │  │ Engine       │  │                        │  │
│  │               │  │              │  │ • A2A Protocol         │  │
│  │ • Open/Close  │  │ • Escrow     │  │ • MCP Bridge           │  │
│  │ • Stream      │  │ • Batch      │  │ • Capability Registry  │  │
│  │ • Settle      │  │ • Verify     │  │ • Message Routing      │  │
│  └───────────────┘  └──────────────┘  └────────────────────────┘  │
│                                                                     │
└────────────────────────────┬────────────────────────────────────────┘
                             │
┌────────────────────────────▼────────────────────────────────────────┐
│                      Tenzro Ledger L1                               │
│                   (Settlement Layer)                                │
│                                                                     │
│  Consensus: HotStuff-2 BFT  │  Multi-VM: EVM + SVM + DAML          │
│  Storage: RocksDB + MPT     │  Identity: TDIP (W3C DID)            │
│  Token: TNZO                │  Security: TEE + ZK Proofs           │
└─────────────────────────────────────────────────────────────────────┘

Supported TEE Platforms

Intel TDX (Trust Domain Extensions)

Hardware-isolated virtual machines on 4th/5th gen Xeon Scalable processors. Remote attestation via Intel Attestation Service. Memory encryption with per-VM keys. Feature flag: intel-tdx

```bash
cargo build --features intel-tdx
```

AMD SEV-SNP (Secure Encrypted Virtualization)

Encrypted VMs on AMD EPYC processors with memory integrity protection. Attestation via AMD Secure Processor. Supports GHCB protocol. Feature flag: amd-sev-snp

```bash
cargo build --features amd-sev-snp
```

AWS Nitro Enclaves

Isolated compute environments on EC2 instances. CPU and memory isolation with cryptographic attestation. No persistent storage, no interactive access. Feature flag: aws-nitro

```bash
cargo build --features aws-nitro
```

NVIDIA Confidential Computing

GPU-accelerated confidential computing on Hopper/Blackwell/Ada Lovelace architectures. NRAS attestation for GPU workloads. Ideal for AI inference and ZK proof generation. Feature flag: nvidia-gpu

```bash
cargo build --features nvidia-gpu
```

Getting Started

1. Run a Tenzro Node

Download and run a full node to participate in the network:

```bash
# Quick install
curl -fsSL https://install.tenzro.network/node | sh

# Run as validator
tenzro-node --role validator
```

2. Register a Model (Providers)

Use the CLI to register your model in the on-chain registry:

```bash
# Register model
tenzro-cli model register \
  --name "llama-3-70b" \
  --category text \
  --modality "text-generation" \
  --pricing "0.0001 TNZO per token" \
  --endpoint "https://my-provider.example.com/v1/inference"

# Start serving
tenzro-cli provider start \
  --model-id "model_abc123" \
  --tee-enabled
```

3. Request Inference (Users)

Use the SDK to discover models and request inference:

```rust
use tenzro_sdk::{TenzroClient, InferenceRequest};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = TenzroClient::connect("http://localhost:8545").await?;

    // Discover models
    let models = client.list_models()
        .category("text")
        .max_price_per_token(0.0001)
        .send()
        .await?;

    // Request inference
    let request = InferenceRequest::new()
        .model_id(&models[0].id)
        .prompt("Explain blockchain consensus in 3 sentences")
        .max_tokens(100);

    let response = client.inference(request).await?;
    println!("Result: {}", response.text);

    Ok(())
}
```

4. Open Micropayment Channel

For high-frequency inference, open a payment channel to reduce transaction costs:

```rust
use tenzro_sdk::{PaymentChannel, Amount};

// Open channel with 10 TNZO escrow
let channel = client.payment_channel()
    .payee(provider_address)
    .amount(Amount::tnzo(10.0))
    .open()
    .await?;

// Stream payments during inference (off-chain)
for token in inference_stream {
    channel.pay(Amount::tnzo(0.0001)).await?;
}

// Close channel and settle on-chain
channel.close().await?;
```

Ready to Build on Tenzro Network?

Access comprehensive documentation, SDKs, and examples to start building decentralized AI applications today.