
Tenzro Network: Decentralized AI Infrastructure

March 2026

Abstract

Tenzro Network is the decentralized protocol layer providing two fundamental capabilities to all participants—humans and AI agents alike: access to intelligence, through a decentralized AI marketplace, and access to security, through TEE enclave services.

The network is designed from first principles for an era where autonomous agents are first-class participants in the economy. Agents can autonomously discover models, negotiate with providers, execute inference requests, manage their own wallets, and settle payments—all without human intervention. The Tenzro Ledger (L1 settlement layer) provides identity, verification, and settlement infrastructure. All payments are denominated in TNZO, the network's utility and governance token.

This whitepaper focuses on the Network layer: the AI marketplace architecture, TEE service registry, inference routing strategies, dynamic pricing, autonomous agent framework, and micropayment settlement mechanisms. For the underlying blockchain consensus and execution layer, see the Tenzro Ledger whitepaper. For the overall ecosystem vision, see the Tenzro Protocol whitepaper.

1. The AI Age Problem

The current AI infrastructure is fundamentally centralized. A handful of companies—OpenAI, Anthropic, Google, Meta—control access to frontier models. This creates several critical problems for an AI-native economy:

1.1 No Permissionless Model Serving

If you train or fine-tune a model, there is no decentralized marketplace where you can register it and start earning revenue. You must either deploy to a centralized platform (which takes a cut, sets the pricing, and can deplatform you) or build your own infrastructure (requiring upfront capital, marketing, and user acquisition).

Users, in turn, have no unified interface to discover models across providers. They must maintain separate API keys, billing relationships, and client integrations for each provider. There is no competitive marketplace driving down prices or improving quality through reputation systems.

1.2 No Verifiable Inference

When you send a prompt to an API and receive a response, you have no cryptographic proof that the claimed model actually produced that output. The provider could be running a cheaper, smaller model and charging you for a larger one. Or they could inject bias, censorship, or malicious content without detection.

Traditional blockchains cannot solve this because AI inference is non-deterministic (same input can produce different outputs) and computationally expensive (re-executing a billion-parameter model on-chain is economically infeasible). There is no mechanism to verify inference results without trusting the provider.

1.3 No Hardware-Rooted Trust for AI Execution

Even if a provider claims to run a specific model, you cannot verify what code is actually executing on their hardware. A malicious operator could modify the inference code to exfiltrate prompts, manipulate outputs, or steal intellectual property embedded in the model weights—all while producing valid API responses.

Trusted Execution Environments (TEEs) like Intel TDX, AMD SEV-SNP, AWS Nitro Enclaves, and NVIDIA GPU Confidential Computing provide hardware-based attestation: cryptographic proof signed by the CPU that specific code is running in an isolated, tamper-resistant environment. But there is no decentralized network for discovering and accessing TEE services.

1.4 No Agent-Native Payment Infrastructure

AI inference pricing is fundamentally per-token (each token generated costs compute). But payment infrastructure is built for per-transaction or per-session billing. Micropayment channels exist in theory but are not standardized or widely deployed for AI use cases.

Autonomous agents need to pay for inference without human approval for every request. They need fine-grained delegation scopes ("this agent can spend up to 100 TNZO per day on inference, only on these models, only for these operations"). Current blockchain wallets and payment protocols do not support this level of granularity.

1.5 Agents Cannot Autonomously Discover and Negotiate

In an agentic economy, agents need to discover available models, compare prices and latency across providers, negotiate terms, open payment channels, execute inference requests, verify results, and close channels—all without human intervention. Current infrastructure treats agents as second-class citizens, requiring humans to manually configure API keys, billing accounts, and access permissions.

2. The Tenzro Network Solution

Tenzro Network provides a decentralized, permissionless protocol for accessing intelligence and security. It consists of two parallel marketplaces:

2.1 Decentralized AI Marketplace (Access to Intelligence)

Model providers register their offerings in an on-chain registry with metadata: model name, version, category (LLM, ImageGen, Speech, Embedding, Custom), modality (Text, Image, Audio, Multimodal), price per token, minimum stake requirement, TEE requirement, supported formats, max context length, and parameter count.

Users (humans or agents) discover models through a unified interface—CLI, desktop app, or SDK—similar to ChatGPT or Claude. The network routes inference requests to providers based on configurable strategies: lowest price, lowest latency, highest reputation, random, or weighted combinations. Providers execute the model and return results, optionally accompanied by zero-knowledge proofs or TEE attestations for verifiable inference.

Billing operates on a per-token basis using micropayment channels: users open channels with providers by depositing TNZO, providers deduct fractional amounts for each token generated, and either party can close the channel to settle the final balance on-chain. The network collects a 0.5% commission on all inference payments.

2.2 TEE Enclave Services (Access to Security)

TEE providers register their hardware capabilities (Intel TDX, AMD SEV-SNP, AWS Nitro, NVIDIA H100/H200/B100/B200) and offer services including key management, custody, confidential computing, secure multi-party computation (MPC), and verifiable inference inside GPU-backed TEEs.

Users can request TEE attestations to prove that inference ran inside a trusted enclave with the claimed model code. Agents can store key shares in distributed TEEs for threshold signing (2-of-3 MPC wallets). The network collects a 0.5% commission on TEE service payments, distributed to the treasury, stakers, and burning.

2.3 Autonomous Agent Framework

Tenzro provides first-class support for AI agents as autonomous participants. Every agent receives a machine identity (DID via TDIP), an auto-provisioned MPC wallet (no seed phrases), and delegation scopes defining spending limits, allowed operations, allowed models, allowed payment protocols, and time-based constraints.

Agents communicate via the A2A (Agent-to-Agent) protocol and MCP (Model Context Protocol), enabling discovery, task delegation, and inter-agent coordination. The network is designed so that an autonomous agent can join the network, discover models, pay for inference, verify results, and settle payments without any human in the loop.

3. Decentralized AI Marketplace

3.1 Model Registry

The model registry is an on-chain catalog storing metadata for all registered models. Model providers submit a registration transaction containing:

| Field | Type | Description |
|---|---|---|
| model_id | String | Unique identifier (e.g., "anthropic/claude-3-opus") |
| name | String | Human-readable name |
| description | String | Model capabilities and use cases |
| version | String | Semantic version (e.g., "1.2.0") |
| category | Enum | LLM, ImageGen, Speech, Embedding, Custom |
| modality | Enum | Text, Image, Audio, Multimodal |
| provider | Address | Provider's blockchain address |
| price_per_token | u128 (18 decimals) | Cost in TNZO per token generated |
| min_stake | u128 | Minimum TNZO stake required to serve this model |
| tee_required | bool | Whether TEE attestation is mandatory |
| supported_formats | Vec<String> | Input/output formats (e.g., ["json", "stream"]) |
| max_context_length | u64 | Maximum context window in tokens |
| parameters | Option<u64> | Parameter count (e.g., 175B for GPT-3) |

3.2 Provider Stake Requirements

To prevent spam and ensure economic alignment, model providers must stake TNZO as collateral before serving models. Stake requirements vary by category, reflecting the computational and capital intensity of different model types:

| Category | Min Stake (TNZO) | Rationale |
|---|---|---|
| LLM | 100,000 | High compute, high-value use cases |
| ImageGen | 50,000 | GPU-intensive, moderate context |
| Embedding | 25,000 | Lower compute, high throughput |
| Speech | 25,000 | Specialized hardware, streaming |
| Custom | 10,000 | Experimentation with novel models |

Staked TNZO can be slashed for misbehavior through automatic enforcement. Slashing conditions include returning results that fail TEE attestation verification (10% of stake, per Section 7.2) and serving model weights whose SHA-256 hash does not match the on-chain registry (5% of stake, per Section 9.2).

Providers can unbond their stake with a 7-day unbonding period, during which they cannot serve new requests but must continue servicing active channels.

3.3 Model Discovery and Filtering

Users discover models by filtering the registry on its metadata fields: category, modality, maximum price per token, TEE requirement, minimum context length, and provider reputation.
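As a minimal sketch of registry filtering, the snippet below matches models against user constraints. `ModelInfo` mirrors a simplified subset of the Section 3.1 registry fields; the struct and function names are illustrative, not the production API.

```rust
/// Simplified subset of the on-chain model registry entry (see §3.1).
struct ModelInfo {
    category: String,      // "LLM", "ImageGen", "Speech", "Embedding", "Custom"
    price_per_token: u128, // TNZO per token (simplified units here)
    tee_capable: bool,
    max_context_length: u64,
}

/// Return every registered model satisfying the user's discovery filters.
fn discover<'a>(
    registry: &'a [ModelInfo],
    category: &str,
    max_price: u128,
    require_tee: bool,
    min_context: u64,
) -> Vec<&'a ModelInfo> {
    registry
        .iter()
        .filter(|m| {
            m.category == category
                && m.price_per_token <= max_price
                && (!require_tee || m.tee_capable)
                && m.max_context_length >= min_context
        })
        .collect()
}
```

A client would apply these filters locally after syncing the registry, then hand the surviving candidates to the inference router described in Section 4.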

4. Intelligent Inference Routing

When a user submits an inference request, the network routes it to a provider based on a configurable strategy. The InferenceRouter supports five strategies:

4.1 Routing Strategies

| Strategy | Selection Criteria | Use Case |
|---|---|---|
| Lowest Price | Provider with minimum price_per_token | Cost-sensitive batch processing |
| Lowest Latency | Provider with minimum avg_latency_ms | Real-time applications, chatbots |
| Highest Reputation | Provider with max(successful / total_requests) | Mission-critical inference, high reliability |
| Random | Uniform random selection | Load balancing, testing new providers |
| Weighted Score | Linear combination of price, latency, reputation | Balanced optimization across multiple dimensions |

4.2 Weighted Scoring Formula

The weighted score strategy computes a score for each provider using normalized metrics:

score = w_price * (1 - norm_price)
      + w_latency * (1 - norm_latency)
      + w_reputation * norm_reputation

where:
  norm_price = (price - min_price) / (max_price - min_price)
  norm_latency = (latency - min_latency) / (max_latency - min_latency)
  norm_reputation = reputation  // already in [0, 1]

Default weights:
  w_price = 0.4
  w_latency = 0.3
  w_reputation = 0.3

Higher scores are better. The formula inverts price and latency (lower is better) but keeps reputation as-is (higher is better). Weights sum to 1.0 and can be customized per user or agent.
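The scoring formula above can be sketched directly. The `Candidate` struct and `best_provider` helper below are illustrative names; the guard against a degenerate range (all providers at the same price or latency) is an assumption about how the normalization handles division by zero.

```rust
/// Per-provider metrics consumed by the Weighted Score strategy (§4.2).
struct Candidate {
    price: f64,      // TNZO per token
    latency: f64,    // avg_latency_ms
    reputation: f64, // already in [0, 1]
}

/// Min-max normalize into [0, 1]; a degenerate range normalizes to 0.
fn norm(v: f64, min: f64, max: f64) -> f64 {
    if max > min { (v - min) / (max - min) } else { 0.0 }
}

/// Score each candidate with the default weights and return the index of
/// the highest-scoring provider (None for an empty pool).
fn best_provider(pool: &[Candidate]) -> Option<usize> {
    let (w_price, w_latency, w_reputation) = (0.4, 0.3, 0.3);
    let min_p = pool.iter().map(|c| c.price).fold(f64::INFINITY, f64::min);
    let max_p = pool.iter().map(|c| c.price).fold(f64::NEG_INFINITY, f64::max);
    let min_l = pool.iter().map(|c| c.latency).fold(f64::INFINITY, f64::min);
    let max_l = pool.iter().map(|c| c.latency).fold(f64::NEG_INFINITY, f64::max);

    pool.iter()
        .enumerate()
        .map(|(i, c)| {
            // Price and latency are inverted (lower is better);
            // reputation is used as-is (higher is better).
            let score = w_price * (1.0 - norm(c.price, min_p, max_p))
                + w_latency * (1.0 - norm(c.latency, min_l, max_l))
                + w_reputation * c.reputation;
            (i, score)
        })
        .max_by(|a, b| a.1.partial_cmp(&b.1).unwrap())
        .map(|(i, _)| i)
}
```

Custom weights per user or agent would simply replace the `(0.4, 0.3, 0.3)` tuple, subject to the weights summing to 1.0.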

4.3 Provider Pool Filtering

Before applying the routing strategy, the router filters the provider pool to exclude unsuitable candidates: providers whose status is Inactive or Banned, providers whose circuit breaker is Open, providers lacking TEE hardware when the request sets tee_required, and providers whose stake has fallen below the model's minimum.

If the filtered pool is empty, the request fails with an error. Users can adjust their constraints (e.g., remove TEE requirement) and retry.

5. Circuit Breaker Pattern

The network implements a circuit breaker pattern to isolate failing providers and prevent cascading failures. Each provider has a circuit breaker in one of three states:

5.1 Circuit Breaker States

| State | Behavior | Transition Condition |
|---|---|---|
| Closed | Normal operation, provider receives requests | → Open if failure_count >= threshold (5) |
| Open | Provider excluded from routing, no requests sent | → Half-Open after timeout_duration (60s) |
| Half-Open | Testing recovery with limited requests (max 1) | → Closed on success, → Open on failure |

5.2 Configuration

CircuitBreakerConfig {
    failure_threshold: 5,       // Open after 5 consecutive failures
    timeout_duration: 60s,      // Wait 60s before testing recovery
    half_open_max_requests: 1,  // Allow 1 request in Half-Open state
}
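The state machine in Section 5.1 can be sketched as follows. This is an illustrative single-threaded version with the default configuration hard-coded, not the production implementation; the probe request in Half-Open is admitted implicitly by the state transition.

```rust
use std::time::{Duration, Instant};

#[derive(Debug, PartialEq)]
enum State { Closed, Open, HalfOpen }

/// Minimal per-provider circuit breaker (§5.1 states and §5.2 defaults).
struct CircuitBreaker {
    state: State,
    failures: u32,
    threshold: u32,    // 5 consecutive failures open the circuit
    timeout: Duration, // 60s before probing recovery
    opened_at: Option<Instant>,
}

impl CircuitBreaker {
    fn new() -> Self {
        Self {
            state: State::Closed,
            failures: 0,
            threshold: 5,
            timeout: Duration::from_secs(60),
            opened_at: None,
        }
    }

    /// May this provider receive a request right now?
    fn allow_request(&mut self) -> bool {
        if self.state == State::Open {
            // After the timeout, allow one probe request through.
            if self.opened_at.map_or(false, |t| t.elapsed() >= self.timeout) {
                self.state = State::HalfOpen;
            }
        }
        self.state != State::Open
    }

    fn record_success(&mut self) {
        self.failures = 0;
        self.state = State::Closed;
    }

    fn record_failure(&mut self) {
        self.failures += 1;
        // A failed probe in Half-Open reopens immediately.
        if self.state == State::HalfOpen || self.failures >= self.threshold {
            self.state = State::Open;
            self.opened_at = Some(Instant::now());
        }
    }
}
```

The router would consult `allow_request` during provider pool filtering (Section 4.3) and feed inference outcomes back via `record_success` / `record_failure`.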

6. Dynamic Pricing Engine

Inference pricing is computed dynamically based on multiple factors. The PricingEngine calculates the total cost for an inference request using the following components:

6.1 Pricing Components

| Component | Formula / Range | Description |
|---|---|---|
| Base Rate | model.price_per_token | Provider-set base price in TNZO |
| Model Complexity | 1.0x – 3.0x | Multiplier based on parameter count and modality |
| TEE Surcharge | +20% | Additional cost for TEE-attested inference |
| Network Congestion | 0.5x – 2.0x | Dynamic factor based on network utilization |
| Stablecoin Conversion | Oracle exchange rate | TNZO → USDC/USDT conversion for multi-asset payment |

6.2 Total Cost Formula

total_cost = base_cost + tee_surcharge

where:
  base_cost = base_rate * token_count * complexity_multiplier * congestion_factor
  base_rate = model.price_per_token (u128, 18 decimals)
  token_count = number of tokens generated
  complexity_multiplier = 1.0 + (log10(parameters_billions) * 0.3)
                        // Scales smoothly with model size
  congestion_factor = 0.5 + (network_utilization * 1.5)
                      // network_utilization in [0, 1]
  tee_surcharge = tee_required ? base_cost * 0.20 : 0
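A minimal sketch of the cost calculation, using f64 for readability (the production engine would use u128 fixed-point with 18 decimals). The clamp to the 1.0x–3.0x complexity range from Section 6.1 is an assumption about how sub-billion-parameter models are handled.

```rust
/// Compute the §6.2 total cost for an inference request.
fn total_cost(
    base_rate: f64,           // TNZO per token (model.price_per_token)
    token_count: u64,
    parameters_billions: f64, // model size, e.g. 175.0 for GPT-3
    network_utilization: f64, // in [0, 1]
    tee_required: bool,
) -> f64 {
    // Complexity scales with log10 of parameter count, clamped to 1.0x–3.0x.
    let complexity = (1.0 + parameters_billions.log10() * 0.3).clamp(1.0, 3.0);
    // Congestion maps utilization [0, 1] onto the 0.5x–2.0x range.
    let congestion = 0.5 + network_utilization * 1.5;
    let base_cost = base_rate * token_count as f64 * complexity * congestion;
    // The +20% TEE surcharge applies to the base cost, not to itself.
    let surcharge = if tee_required { base_cost * 0.20 } else { 0.0 };
    base_cost + surcharge
}
```

For example, 1,000 tokens of a 10B-parameter model at 0.001 TNZO/token under neutral congestion (factor 1.0) costs roughly 1.3 TNZO without TEE, and 1.56 TNZO with the 20% surcharge.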

6.3 Multi-Asset Payment

Users can pay for inference in TNZO, USDC, or USDT. The pricing engine queries an on-chain oracle for exchange rates and converts the TNZO-denominated price to the requested payment asset. All settlements occur on-chain in the native payment asset, with the network commission (0.5%) collected in TNZO after conversion.

7. TEE-Protected Inference

For use cases requiring verifiable inference (e.g., compliance, high-stakes decisions, proprietary prompts), providers can execute models inside Trusted Execution Environments and return cryptographic attestations proving code integrity.

7.1 Supported TEE Platforms

| Platform | Attestation Type | Use Case |
|---|---|---|
| Intel TDX (Trust Domain Extensions) | DCAP (Data Center Attestation Primitives) | CPU-based confidential VMs |
| AMD SEV-SNP (Secure Encrypted Virtualization) | ASP (AMD Secure Processor) | CPU-based confidential VMs |
| AWS Nitro Enclaves | Signed attestation documents (Nitro Security Module) | Cloud-native confidential compute |
| NVIDIA GPU (H100, H200, B100, B200, Ada Lovelace) | NRAS (NVIDIA Remote Attestation Service) | GPU-accelerated AI inference in TEE |

7.2 Confidential Inference Flow

  1. User submits inference request with tee_required: true
  2. Router selects a TEE-capable provider (filtered by TEE hardware support)
  3. Provider loads model weights into TEE enclave (memory encrypted by CPU/GPU)
  4. Provider generates attestation report signed by hardware root key, containing:
    • Hash of inference code running in enclave
    • Hash of model weights loaded into memory
    • TEE platform (Intel TDX, AMD SEV-SNP, AWS Nitro, NVIDIA GPU)
    • Timestamp of attestation
    • Provider public key
  5. Provider executes inference inside enclave (inputs never exposed in plaintext)
  6. Provider returns inference result + attestation report to user
  7. User (or network verifier) validates attestation:
    • Verify signature against Intel/AMD/AWS/NVIDIA certificate chain
    • Check code hash matches expected inference runtime
    • Check model hash matches registered model weights
    • Verify timestamp is within acceptable window (24h for NVIDIA NRAS)
  8. If attestation is valid, user accepts result and settles payment
  9. If attestation is invalid, user rejects result and slashes provider (10% stake)
  10. Network records attestation on-chain for auditability

7.3 TEE Surcharge Economics

TEE-protected inference costs 20% more than non-TEE inference due to the overhead of enclave memory encryption, the cost of generating and verifying hardware attestations, and the relative scarcity of TEE-capable hardware, particularly confidential-computing GPUs.

The 20% surcharge compensates providers for these additional costs while remaining economically attractive for high-assurance use cases.

8. Provider Management

8.1 Provider Metrics

The network tracks the following metrics for each provider:

| Metric | Type | Description |
|---|---|---|
| total_requests | u64 | Lifetime request count |
| successful | u64 | Successful inference count |
| failed | u64 | Failed inference count (timeouts, errors, invalid attestations) |
| avg_latency_ms | f64 | Exponential moving average of response time |
| last_health_check | Timestamp | Last successful heartbeat |
| status | Enum | Active, Degraded, Inactive, Banned |

8.2 Provider Status Lifecycle

| Status | Condition | Router Behavior |
|---|---|---|
| Active | failure_rate < 10%, heartbeat < 5 min | Full participation in routing |
| Degraded | failure_rate 10–20%, heartbeat 5–60 min | Included in routing but penalized in weighted score |
| Inactive | failure_rate > 20%, or heartbeat > 60 min | Excluded from routing until recovery |
| Banned | Governance vote or repeated slashing | Permanently excluded, stake slashed 100% |
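The metric-driven part of this lifecycle can be sketched as a simple classifier. Banned is excluded here because it results from governance action rather than metrics; the function name and exact boundary handling are illustrative assumptions.

```rust
#[derive(Debug, PartialEq)]
enum Status { Active, Degraded, Inactive }

/// Classify a provider from its failure rate and heartbeat age (§8.2).
/// failure_rate is in [0, 1]; heartbeat_age_mins is minutes since the
/// last successful health check.
fn classify(failure_rate: f64, heartbeat_age_mins: f64) -> Status {
    if failure_rate > 0.20 || heartbeat_age_mins > 60.0 {
        Status::Inactive
    } else if failure_rate >= 0.10 || heartbeat_age_mins >= 5.0 {
        Status::Degraded
    } else {
        Status::Active
    }
}
```

The router would re-run this classification on each heartbeat and after each recorded request outcome.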

8.3 Provider Economics

Revenue Model:

provider_revenue = price_per_token * tokens_generated - network_commission

network_commission = 0.5% of gross payment

Example:
  Price: 0.001 TNZO/token
  Tokens: 1,000
  Gross: 1.0 TNZO
  Commission: 0.005 TNZO
  Provider net: 0.995 TNZO

Staking Rewards: Model providers receive a 1.1x multiplier on their staking rewards compared to pure validators, incentivizing value-added services beyond basic consensus participation.

9. Model Downloads and Integrity

9.1 Download Progress Tracking

The DownloadManager tracks model download progress with the following state:

DownloadProgress {
    model_id: String,
    bytes_downloaded: u64,
    total_bytes: u64,
    percentage: f64,        // 0.0 - 100.0
    speed_mbps: f64,        // Megabits per second
    status: DownloadStatus, // Pending | InProgress | Completed | Failed
}

9.2 SHA-256 Integrity Verification

After download completes, the client computes the SHA-256 hash of the model file and compares it against the hash registered on-chain. If the hashes match, the model is trusted. If they differ, the provider is slashed 5% and the download is marked as failed.

fn verify_model_hash(model_id: &str, local_path: &Path) -> Result<bool> {
  let expected_hash = registry.get_model_hash(model_id)?;
  let actual_hash = sha256_file(local_path)?;

  if expected_hash == actual_hash {
    Ok(true)
  } else {
    slash_provider(model_id, 0.05)?;  // 5% slash
    Ok(false)
  }
}

9.3 Resumable Downloads

Model weights can be gigabytes to terabytes in size. The network supports resumable downloads via HTTP Range requests, allowing clients to pause and resume transfers without restarting from scratch. The DownloadManager stores partial progress to disk and resumes from the last completed chunk.
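The resume logic hinges on constructing a correct HTTP Range header from the persisted progress. The sketch below shows only that bookkeeping step; a real client would also send a validator such as If-Range or compare ETags so the resumed bytes match the original file (an assumption noted here, not part of the original text).

```rust
/// Build the HTTP Range header used to resume a partial download (§9.3).
/// Returns None when there is nothing to resume: either no bytes were
/// downloaded yet (fresh start) or the file is already complete.
fn resume_range_header(bytes_downloaded: u64, total_bytes: u64) -> Option<String> {
    if bytes_downloaded == 0 || bytes_downloaded >= total_bytes {
        return None;
    }
    // An open-ended range requests everything from the resume point on.
    Some(format!("Range: bytes={}-", bytes_downloaded))
}
```

On a 206 Partial Content response the client appends to the existing file; on 200 OK (server ignored the range) it must restart from byte zero.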

10. Autonomous Agent Framework

Tenzro provides first-class infrastructure for AI agents to participate as autonomous economic actors. Agents can register identities, manage wallets, pay for services, and coordinate with other agents—all without human intervention.

10.1 Agent Identity System (TDIP)

Every agent receives a decentralized identifier (DID) via the Tenzro Decentralized Identity Protocol (TDIP). Agent DIDs follow two formats:

Agent identity data includes capabilities (skills the agent can perform), delegation scope (spending limits, allowed operations), controller DID (if applicable), reputation score, and Tenzro Agent ID for A2A protocol discovery.

10.2 Auto-Provisioned MPC Wallets

Every agent identity automatically receives a 2-of-3 threshold MPC wallet. The three key shares are distributed as follows: Share 1 is held by the agent runtime, Share 2 by the controller (human or organization), and Share 3 in network-operated TEE storage for recovery.

To sign a transaction, the agent combines Share 1 + Share 2 (normal operation) or Share 1 + Share 3 (recovery mode if controller key is lost). This provides security without seed phrases or single points of failure.

10.3 Agent Lifecycle States

| State | Permissions | Transition |
|---|---|---|
| Created | Identity registered, wallet provisioned, no actions allowed | → Active after controller approval or autonomous activation |
| Active | Full permissions within delegation scope | → Suspended if limits exceeded or controller pauses |
| Suspended | Read-only access, cannot initiate transactions | → Active after controller resumes, or → Terminated |
| Terminated | Permanent deactivation, wallet frozen, identity revoked | No transitions (final state) |

10.4 Capability Attestations

Agents declare capabilities (skills they can perform) as part of their identity. For example, an agent might declare capabilities: "wallet", "inference", "settlement", "verification". These are stored on-chain as part of the agent's DID Document.

For high-assurance use cases, agents can provide TEE-backed capability attestations: cryptographic proof that the agent's code running in a TEE enclave actually implements the claimed capabilities. This prevents agents from falsely advertising skills they don't possess.

10.5 Delegation Scopes

For controlled agents, the controller (human or organization) defines fine-grained delegation scopes that limit what the agent can do autonomously:

| Scope Field | Type | Example |
|---|---|---|
| max_transaction_value | Option<u128> | 100 TNZO per transaction |
| max_daily_spend | Option<u128> | 1,000 TNZO per 24 hours |
| allowed_operations | Vec<String> | ["inference", "settlement", "transfer"] |
| allowed_contracts | Vec<Address> | [0xABC...DEF] (whitelist) |
| time_bound | Option<(start, end)> | Active only 9am–5pm UTC |
| allowed_payment_protocols | Vec<ProtocolId> | [Mpp, X402, Direct] |
| allowed_chains | Vec<ChainId> | [1337 (Tenzro), 1 (Ethereum)] |

Before executing any transaction, the agent runtime checks the delegation scope and rejects operations that violate constraints. Controllers can update delegation scopes at any time.
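The pre-flight check can be sketched over a subset of the scope fields above. This is an illustrative version covering operations, per-transaction value, and daily spend; the remaining fields (contracts, time bounds, protocols, chains) would be checked the same way.

```rust
/// Subset of the §10.5 delegation scope (illustrative).
struct DelegationScope {
    max_transaction_value: Option<u128>,
    max_daily_spend: Option<u128>,
    allowed_operations: Vec<String>,
}

/// Reject a proposed transaction that violates the agent's scope.
/// `spent_today` is the agent's cumulative spend in the current 24h window.
fn check_scope(
    scope: &DelegationScope,
    operation: &str,
    value: u128,
    spent_today: u128,
) -> Result<(), &'static str> {
    if !scope.allowed_operations.iter().any(|op| op == operation) {
        return Err("operation not in allowed_operations");
    }
    if scope.max_transaction_value.map_or(false, |max| value > max) {
        return Err("exceeds max_transaction_value");
    }
    if scope.max_daily_spend.map_or(false, |max| spent_today + value > max) {
        return Err("would exceed max_daily_spend");
    }
    Ok(())
}
```

Because `None` means "no limit set", the `map_or(false, ...)` pattern only enforces a constraint the controller actually configured.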

11. Agent Communication Protocols

Tenzro nodes expose two agent communication protocols for discovery, messaging, and task coordination.

11.1 A2A Protocol (Google Specification)

The Agent-to-Agent (A2A) protocol follows Google's A2A specification. It provides agent discovery via published Agent Cards, direct agent-to-agent messaging, and task delegation with lifecycle tracking (submit, query, list, cancel).

Supported JSON-RPC Methods:

| Method | Parameters | Returns |
|---|---|---|
| message/send | to, from, content, metadata | message_id |
| tasks/send | task_type, params, callback_url | task_id, status |
| tasks/get | task_id | task_id, status, result, progress |
| tasks/list | filter (optional) | Array of tasks |
| tasks/cancel | task_id | success: bool |

Agent Card Skills: The Tenzro node's Agent Card advertises five skills: wallet (balance queries, transfers), identity (DID resolution, credential verification), inference (model discovery, request submission), settlement (payment channel management), and verification (ZK/TEE proof validation).

11.2 MCP Server (Anthropic Specification)

The Model Context Protocol (MCP) server follows Anthropic's MCP specification. It uses the Streamable HTTP transport at the /mcp endpoint and provides 10 tools for blockchain interaction:

| Tool | Description |
|---|---|
| get_balance | Query TNZO balance by address |
| send_transaction | Create and submit transfer transactions |
| get_block | Retrieve block by height from storage |
| get_node_status | Node health, block height, peer count, uptime |
| create_wallet | Generate new Ed25519 keypair |
| request_faucet | Request testnet TNZO tokens (rate-limited) |
| register_identity | Register human or machine DID via TDIP |
| resolve_did | Resolve DID to identity information |
| verify_zk_proof | Submit ZK proof for verification |
| list_models | List available AI models on the network |

AI agents using Anthropic's Claude or other MCP-compatible models can directly interact with the Tenzro blockchain via these tools without custom integrations.

12. Per-Token Micropayment Settlement

Inference pricing is per-token, but on-chain transactions are expensive (gas fees). Micropayment channels enable off-chain per-token billing with on-chain settlement only at channel open and close.

12.1 Channel Lifecycle

  1. Open: User submits on-chain transaction to open channel with provider, depositing N TNZO (e.g., 100 TNZO). Transaction creates channel state: user address, provider address, balance (100 TNZO), nonce (0), expiration (30 days).
  2. Inference (off-chain): User submits inference request. Provider executes model and returns result + updated channel state: balance (99.995 TNZO after 0.005 TNZO deduction for 5 tokens at 0.001 TNZO/token), nonce (1), signed by both parties. This repeats for each request, updating balance and nonce off-chain.
  3. Close (cooperative): User or provider submits final signed channel state on-chain. Network verifies signatures and nonce, settles balances (provider receives 0.005 TNZO, user receives 99.995 TNZO refund), collects 0.5% commission, and closes channel.
  4. Close (disputed): If one party is unresponsive, the other party submits their latest signed state. Network enforces a challenge period (24 hours). If the counterparty submits a state with higher nonce, that state is used. After challenge period, channel settles based on highest-nonce state.
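The off-chain bookkeeping in steps 2 and 4 can be sketched as below. The struct and helper names are illustrative; real channel states carry both parties' signatures, and balances here use a micro-TNZO fixed point (1 TNZO = 1,000,000 units) rather than the 18-decimal u128 used on-chain.

```rust
/// Off-chain micropayment channel state (§12.1, simplified).
#[derive(Debug, PartialEq)]
struct ChannelState {
    balance: u128, // user's remaining deposit, in micro-TNZO for this sketch
    nonce: u64,    // strictly increasing; highest nonce wins disputes
}

/// Step 2: deduct the per-token cost of one inference off-chain,
/// producing the next signed state (signatures omitted here).
fn apply_inference(
    state: &ChannelState,
    tokens: u64,
    price_per_token: u128, // micro-TNZO per token
) -> Result<ChannelState, &'static str> {
    let cost = price_per_token
        .checked_mul(tokens as u128)
        .ok_or("cost overflow")?;
    let balance = state
        .balance
        .checked_sub(cost)
        .ok_or("insufficient channel balance")?;
    Ok(ChannelState { balance, nonce: state.nonce + 1 })
}

/// Step 4: during the challenge period, the highest-nonce state wins.
fn resolve_dispute(a: ChannelState, b: ChannelState) -> ChannelState {
    if a.nonce >= b.nonce { a } else { b }
}
```

Replaying the worked example: a 100 TNZO deposit (100,000,000 micro-TNZO) after 5 tokens at 0.001 TNZO/token leaves a balance of 99.995 TNZO at nonce 1.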

12.2 Network Commission Distribution

The network collects a 0.5% commission on all inference payments at settlement time. Commission distribution:

commission = gross_payment * 0.005  // 0.5%

Distribution:
  40% → Network treasury (development, infrastructure, grants)
  30% → Burned (deflationary pressure on TNZO supply)
  30% → TNZO stakers (proportional to stake amount)

Example:
  Gross payment: 100 TNZO
  Commission: 0.5 TNZO
  Treasury: 0.2 TNZO
  Burned: 0.15 TNZO
  Stakers: 0.15 TNZO
  Provider net: 99.5 TNZO
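The settlement split can be sketched in integer arithmetic. This illustrative version works in micro-TNZO (1 TNZO = 1,000,000 units) and assigns any rounding remainder of the commission to stakers, an assumption not specified in the text.

```rust
/// Split a gross payment per §12.2. Returns
/// (commission, treasury, burned, stakers, provider_net), all in micro-TNZO.
fn settle(gross: u128) -> (u128, u128, u128, u128, u128) {
    let commission = gross * 5 / 1000;            // 0.5% of gross
    let treasury = commission * 40 / 100;         // 40% to treasury
    let burned = commission * 30 / 100;           // 30% burned
    let stakers = commission - treasury - burned; // 30% (plus any remainder)
    let provider_net = gross - commission;
    (commission, treasury, burned, stakers, provider_net)
}
```

Applied to the worked example (gross 100 TNZO = 100,000,000 micro-TNZO), this reproduces the figures above: 0.5 TNZO commission, 0.2 / 0.15 / 0.15 TNZO to treasury / burn / stakers, and 99.5 TNZO to the provider.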

12.3 Channel State Management

Micropayment channel state requires persistent storage for production deployment. The architecture supports persistent channel state in dedicated storage with automatic recovery after node restart.

Dispute Resolution: The challenge period mechanism allows counterparties to submit higher-nonce states during channel closure. Watchtower services can monitor channels and automatically submit the latest state to prevent fraud.

13. Implementation Roadmap

The Tenzro Network implementation follows a phased development approach, prioritizing core infrastructure before advancing to application-layer features and production hardening.

13.1 Network and Consensus Infrastructure

13.2 AI Infrastructure Development

13.3 Security and Production Hardening

14. Conclusion

Tenzro Network enables a new paradigm for artificial intelligence: permissionless access to intelligence and security for both humans and AI agents. By combining a decentralized AI marketplace, TEE-protected inference, intelligent routing, dynamic pricing, autonomous agent infrastructure, and per-token micropayment settlement, Tenzro provides the foundational protocol for an AI-native economy.

Model providers can permissionlessly register and monetize their models without centralized gatekeepers. Users can discover and access models from multiple providers through a unified interface, paying only for tokens consumed. Agents can autonomously discover models, negotiate pricing, verify results, and settle payments without human intervention. TEE providers earn fees for hardware-rooted trust services, creating a marketplace for confidential computing.

The network's 0.5% commission on inference and TEE payments flows to the treasury (40%), burning (30%), and stakers (30%), aligning incentives across all participants. Providers stake TNZO as collateral, subject to slashing for misbehavior, ensuring economic alignment and service quality.

The core infrastructure through Phase 7 is complete. Remaining work focuses on bridge interoperability, AI infrastructure, and production hardening before mainnet launch.

Tenzro Network is designed for a future where AI agents are first-class economic participants, conducting financial transactions, accessing intelligence, and coordinating autonomously. The network is live on testnet (rpc.tenzro.network, api.tenzro.network, mcp.tenzro.network, a2a.tenzro.network), and we invite developers, model providers, and AI agents to participate, provide feedback, and help build the infrastructure for the AI age.


Disclaimer: This whitepaper describes the technical architecture for Tenzro Network as of March 2026. The project is in active development. Implementation details, timelines, and features are subject to change. TNZO is a utility and governance token used for transaction fees, service payments, staking, and governance—it is not a security token or investment contract. The network is live on testnet for development and testing purposes only. Testnet tokens have no monetary value. This document is for informational purposes only and does not constitute financial, legal, or investment advice.