
Decentralized AI Infrastructure

AI inference is concentrated in a few cloud providers and a few regions. A single outage, pricing change, or policy shift can break applications built on top. Tenzro decentralizes AI compute by turning any hardware operator into a provider — with model routing, TEE privacy, and per-token billing built into the protocol.

The Problem

Running AI at scale means depending on a handful of API providers. Rate limits, region restrictions, model deprecations, and pricing increases are unilateral decisions that application developers cannot control. Private data sent to cloud inference endpoints has no hardware-level isolation guarantee.

  • Single-provider dependency creates availability and pricing risk
  • No way to verify that inference ran on specific hardware or with specific model weights
  • Sensitive prompts and responses are processed on shared infrastructure without isolation
  • Edge devices with GPU capacity sit idle while cloud providers charge premium rates
  • Model providers have no standardized way to monetize open-weight models

How Tenzro Solves It

Tenzro creates a global marketplace where anyone can serve AI models and earn TNZO. The InferenceRouter selects providers based on price, latency, or reputation. TEE enclaves ensure prompt privacy. Micropayment channels enable per-token billing. The model registry tracks 41+ models across providers.
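The per-token billing mentioned above can be sketched as simple channel accounting. This is an illustrative sketch only: the struct fields, the price schedule, and the `charge` method are assumptions for the example, not the protocol's actual channel format.

```rust
// Illustrative sketch of per-token micropayment accounting.
// Field names and prices are assumptions, not the real wire format.

/// Per-token pricing in TNZO, as a provider might publish it.
struct Pricing {
    input_price: f64,  // TNZO per input token
    output_price: f64, // TNZO per output token
}

/// A unidirectional payment channel between consumer and provider.
struct Channel {
    balance: f64, // TNZO locked by the consumer
    spent: f64,   // cumulative amount owed to the provider
}

impl Channel {
    /// Charge one inference request against the channel; returns the
    /// cost, or None if the channel lacks funds.
    fn charge(&mut self, p: &Pricing, tokens_in: u64, tokens_out: u64) -> Option<f64> {
        let cost = tokens_in as f64 * p.input_price
            + tokens_out as f64 * p.output_price;
        if self.spent + cost > self.balance {
            return None; // insufficient channel balance
        }
        self.spent += cost;
        Some(cost)
    }
}

fn main() {
    let pricing = Pricing { input_price: 0.001, output_price: 0.002 };
    let mut channel = Channel { balance: 10.0, spent: 0.0 };
    // 500 prompt tokens in, 1200 completion tokens out.
    let cost = channel.charge(&pricing, 500, 1200).unwrap();
    println!("cost = {:.3} TNZO, spent = {:.3}", cost, channel.spent);
}
```

Because each request only updates an off-chain running total, settlement on-chain happens once when the channel closes, which is what makes per-token granularity economical.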

Model Registry

The registry tracks 41+ models, including Gemma 3, Gemma 4, Qwen 3, Qwen 3.5, Phi 4, Mistral, and more. Providers register models with category and modality metadata. The DownloadManager handles HuggingFace Hub downloads with SHA-256 integrity verification.
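A registry entry with category and modality metadata might look like the following. The field names (`category`, `modality`, `sha256`) and the lookup shape are illustrative assumptions, not the actual registry schema:

```rust
use std::collections::HashMap;

// Illustrative sketch of a model registry keyed by model id.
// Field names are assumptions about the metadata tracked, not the
// actual schema.

#[derive(Debug, Clone)]
struct ModelEntry {
    category: &'static str, // e.g. "chat", "vision"
    modality: &'static str, // e.g. "text", "image+text"
    sha256: &'static str,   // expected weight digest (placeholder here),
                            // checked by the DownloadManager after download
}

fn main() {
    let mut registry: HashMap<&str, ModelEntry> = HashMap::new();
    registry.insert("gemma3:27b", ModelEntry {
        category: "chat",
        modality: "text",
        sha256: "<expected-digest>", // placeholder, not a real digest
    });

    // A provider looks up a model before serving it; the digest in the
    // entry is what downloaded weights are verified against.
    match registry.get("gemma3:27b") {
        Some(entry) => println!("{} / {}", entry.category, entry.modality),
        None => println!("model not registered"),
    }
}
```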

Inference Routing

The InferenceRouter supports four strategies: price (cheapest), latency (fastest), reputation (most reliable), and weighted (balanced). Circuit breakers automatically remove failing providers and re-route to healthy ones.
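The four strategies and the circuit-breaker filter can be sketched as a scoring function over candidate providers. The struct fields, the weighted-blend coefficients, and the selection code are illustrative assumptions, not the InferenceRouter's actual implementation:

```rust
// Illustrative sketch of strategy-based provider selection.
// Fields and weighted-score coefficients are assumptions for the example.

#[derive(Debug)]
struct Provider {
    id: &'static str,
    price_per_token: f64, // TNZO
    latency_ms: f64,
    reputation: f64,      // 0.0..=1.0
    circuit_open: bool,   // set by the circuit breaker after failures
}

enum Strategy { Price, Latency, Reputation, Weighted }

/// Pick the best healthy provider under the given strategy.
fn select<'a>(providers: &'a [Provider], s: &Strategy) -> Option<&'a Provider> {
    providers
        .iter()
        // Circuit breaker: failing providers are removed from routing.
        .filter(|p| !p.circuit_open)
        .min_by(|a, b| {
            // Lower score wins under every strategy.
            let score = |p: &Provider| match s {
                Strategy::Price => p.price_per_token,
                Strategy::Latency => p.latency_ms,
                Strategy::Reputation => -p.reputation,
                // Illustrative blend of all three signals.
                Strategy::Weighted => {
                    p.price_per_token * 1000.0 + p.latency_ms * 0.01 - p.reputation
                }
            };
            score(a).partial_cmp(&score(b)).unwrap()
        })
}

fn main() {
    let providers = [
        Provider { id: "a", price_per_token: 0.001,  latency_ms: 300.0, reputation: 0.90, circuit_open: false },
        Provider { id: "b", price_per_token: 0.002,  latency_ms: 80.0,  reputation: 0.70, circuit_open: false },
        // Cheapest and fastest, but its circuit is open, so it is skipped.
        Provider { id: "c", price_per_token: 0.0005, latency_ms: 50.0,  reputation: 0.95, circuit_open: true },
    ];
    println!("price   -> {}", select(&providers, &Strategy::Price).unwrap().id);
    println!("latency -> {}", select(&providers, &Strategy::Latency).unwrap().id);
}
```

Note how provider "c" would win on every axis but is excluded by the breaker, so routing falls through to the next-best healthy provider.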

TEE Confidential Inference

Run inference inside Intel TDX, AMD SEV-SNP, AWS Nitro, or NVIDIA GPU CC enclaves. Prompts and responses are encrypted with AES-256-GCM using HKDF-SHA256 derived keys with vendor-specific domain separation. Hardware attestation proves isolation.
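Vendor-specific domain separation means the HKDF `info` input differs per enclave type, so the same input keying material yields unrelated AES-256-GCM keys on each platform. The label format below (`tenzro/tee/<vendor>/v1`) is purely an assumption for illustration; only the vendor list comes from the text:

```rust
// Illustrative sketch of vendor-specific domain separation labels.
// The label format is an assumption; in the real protocol a string
// like this would feed HKDF-SHA256's `info` parameter, and the derived
// 32-byte key would be used for AES-256-GCM.

#[derive(Debug, Clone, Copy)]
enum TeeVendor { IntelTdx, AmdSevSnp, AwsNitro, NvidiaGpuCc }

fn hkdf_info(vendor: TeeVendor) -> String {
    let tag = match vendor {
        TeeVendor::IntelTdx => "intel-tdx",
        TeeVendor::AmdSevSnp => "amd-sev-snp",
        TeeVendor::AwsNitro => "aws-nitro",
        TeeVendor::NvidiaGpuCc => "nvidia-gpu-cc",
    };
    // Distinct info strings guarantee that identical input keying
    // material derives independent keys per vendor and version.
    format!("tenzro/tee/{tag}/v1")
}

fn main() {
    for v in [
        TeeVendor::IntelTdx,
        TeeVendor::AmdSevSnp,
        TeeVendor::AwsNitro,
        TeeVendor::NvidiaGpuCc,
    ] {
        println!("{}", hkdf_info(v));
    }
}
```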

Provider Economics

Register as a ModelProvider with tenzro_registerProvider, set pricing schedules, and earn TNZO from inference requests. TEE-attested providers get 2x weight in leader selection. Staking rewards and network incentives come on top of direct payments.
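The 2x weight for TEE-attested providers can be sketched as cumulative-weight selection. The deterministic `roll` argument stands in for whatever randomness source the protocol actually uses; everything except the 2x factor is an assumption for illustration:

```rust
// Sketch of weighted leader selection where TEE-attested providers
// count double. Types and the roll mechanism are illustrative.

struct Candidate {
    id: &'static str,
    tee_attested: bool,
}

fn weight(c: &Candidate) -> u64 {
    if c.tee_attested { 2 } else { 1 } // 2x weight for attested providers
}

/// Pick a leader by walking cumulative weights with a roll
/// reduced modulo the total weight.
fn select_leader<'a>(candidates: &'a [Candidate], roll: u64) -> &'a Candidate {
    let total: u64 = candidates.iter().map(weight).sum();
    let mut point = roll % total;
    for c in candidates {
        let w = weight(c);
        if point < w {
            return c;
        }
        point -= w;
    }
    unreachable!("roll is reduced mod total weight")
}

fn main() {
    let candidates = [
        Candidate { id: "plain",    tee_attested: false }, // covers roll 0
        Candidate { id: "attested", tee_attested: true },  // covers rolls 1..=2
    ];
    for roll in 0..3 {
        println!("roll {roll} -> {}", select_leader(&candidates, roll).id);
    }
}
```

With one attested and one plain candidate, the attested node wins two of every three rolls, matching the 2x weighting.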

Architecture

A user requests inference. The InferenceRouter selects the best provider, the model executes inside a TEE enclave, results are verified with a ZK proof, and payment settles via micropayment channel.
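The flow above can be written as a typed pipeline. Only the ordering of the stages (route, execute, verify, settle) comes from the text; every type, function body, and value here is a placeholder:

```rust
// Illustrative pipeline for the inference flow described above.
// All types and bodies are placeholders; only the stage order is real.

struct Routed { provider: &'static str }
struct Executed { output: String, proof: &'static str }

fn route(_prompt: &str) -> Result<Routed, String> {
    // InferenceRouter would pick the best provider here.
    Ok(Routed { provider: "provider-1" })
}

fn execute_in_tee(r: &Routed, prompt: &str) -> Result<Executed, String> {
    // The model runs inside a TEE enclave; output is a stand-in.
    Ok(Executed { output: format!("[{}] echo: {prompt}", r.provider), proof: "zk-proof" })
}

fn verify(e: &Executed) -> Result<(), String> {
    // A real verifier would check the ZK proof, not just non-emptiness.
    if e.proof.is_empty() { Err("invalid proof".into()) } else { Ok(()) }
}

fn settle(_e: &Executed) -> Result<(), String> {
    Ok(()) // micropayment channel update would happen here
}

/// route -> execute -> verify -> settle; any stage can fail the request.
fn run(prompt: &str) -> Result<String, String> {
    let routed = route(prompt)?;
    let executed = execute_in_tee(&routed, prompt)?;
    verify(&executed)?;
    settle(&executed)?;
    Ok(executed.output)
}

fn main() {
    println!("{}", run("hello").unwrap());
}
```

The `?` chain makes the failure semantics explicit: a request that cannot be routed, executed, verified, or settled never produces a payable result.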

Edge AI & Industrial Applications

Every Tenzro node can serve AI models locally — no cloud required. This enables edge computing scenarios where latency, privacy, or connectivity constraints make centralized APIs impractical.

Manufacturing & Industrial IoT

Quality-control agents running vision models on edge nodes. Real-time anomaly detection without sending factory data to the cloud. Each inspection produces a TEE attestation for regulatory compliance.

Energy & Utilities

Grid monitoring agents analyzing sensor data at the edge. Predictive maintenance models running on local nodes. Settlement for energy trading between producers and consumers via micropayment channels.

Healthcare & Life Sciences

Patient data analysis in TEE enclaves — models run on hospital nodes, data never leaves the premises. Verifiable credentials for medical certifications. ZK proofs for privacy-preserving clinical trials.

Financial Institutions

Banks running AML/fraud detection models on their own infrastructure with TEE attestation proving computation integrity. Real-time transaction screening without exposing customer data to third parties.

Code Example

Serve a model as a provider and request inference as a consumer:

CLI + Rust SDK
# Provider: serve a model
tenzro-cli provider register --role model-provider
tenzro-cli model serve gemma3:27b
tenzro-cli provider pricing set --model gemma3:27b \
    --input-price 0.001 --output-price 0.002

# Consumer: request inference via SDK
use serde_json::json; // needed for the json! macro below
use tenzro_sdk::TenzroClient;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let client = TenzroClient::new("https://rpc.tenzro.network");

    // List available models
    let models = client.list_models(None, None).await?;

    // Chat completion with automatic routing
    let response = client.chat_completion(
        "gemma3:27b",
        vec![json!({
            "role": "user",
            "content": "Summarize the latest DeFi trends"
        })],
    ).await?;

    // List model endpoints with status
    let endpoints = client.list_model_endpoints().await?;

    // Verify inference result
    let verified = client.verify_inference_result(
        &response.proof,
        &response.model_hash,
    ).await?;

    Ok(())
}

Relevant Tools & APIs

MCP Tools

list_models
chat_completion
list_model_endpoints
register_provider
get_provider_stats
stake_tokens
verify_zk_proof

RPC Methods

tenzro_listModels
tenzro_chat
tenzro_serveModel
tenzro_stopModel
tenzro_registerProvider
tenzro_providerStats
tenzro_listModelEndpoints

CLI Commands

tenzro-cli model list
tenzro-cli model serve
tenzro-cli model stop
tenzro-cli provider register
tenzro-cli provider pricing
tenzro-cli chat