
Tenzro Network: The Operating System for the AI Economy

April 2026

Abstract

Tenzro Network is the operating system for the AI economy — an AI-native economic system purpose-built for the agentic era, where agents are first-class economic actors. The protocol layer provides two fundamental capabilities to all participants—humans and AI agents alike: access to intelligence, through a decentralized AI marketplace, and access to security, through TEE enclave services.

Agents can autonomously discover models, negotiate with providers, execute inference requests, manage their own wallets, and settle payments—all without human intervention. The Tenzro Ledger (settlement layer) provides identity, verification, and settlement infrastructure. All payments are denominated in TNZO, the network's utility and governance token.

This whitepaper focuses on the Network layer: the AI marketplace architecture, TEE service registry, inference routing strategies, dynamic pricing, autonomous agent framework, and micropayment settlement mechanisms. For the underlying blockchain consensus and execution layer, see the Tenzro Ledger whitepaper. For the overall ecosystem vision, see the Tenzro Protocol whitepaper.

1. The AI Age Problem

The current AI infrastructure is fundamentally centralized. A handful of companies—OpenAI, Anthropic, Google, Meta—control access to frontier models, running on a handful of cloud providers, in a handful of regions. For most of the history of AI development, this was a reasonable architectural choice. It no longer is.

1.0 Concentration Risk Is Now Real

AI is no longer a tool. It is becoming part of workflows, part of decision-making, part of economic systems. That shift changes the stakes of infrastructure concentration entirely.

Most AI workloads today depend on a small number of providers and regions. That creates single points of failure at every layer: a provider outage cascades into application downtime; a pricing change becomes a budget crisis overnight; a jurisdictional dispute or policy decision can restrict access entirely. Recent geopolitical dynamics — US-China technology tensions, Middle East infrastructure constraints, EU data sovereignty requirements — have made these risks concrete rather than theoretical. Infrastructure that was designed for convenience is now treated as critical, without ever having been hardened for it.

The challenge is not ideological. Decentralization for its own sake is not the answer. The answer is infrastructure designed from first principles for resilience: redundant provider routing, no single jurisdiction dependency, cryptographic verification that doesn't require trusting any individual operator. The world became ready for this architecture at the same time the need for it became urgent.

Beyond resilience, centralized AI infrastructure creates several technical problems for an autonomous AI economy that cannot be solved by adding more capacity to the same concentration:

1.1 No Permissionless Model Serving

If you train or fine-tune a model, there is no decentralized marketplace where you can register it and start earning revenue. You must either deploy to a centralized platform (which takes a cut, sets the pricing, and can deplatform you) or build your own infrastructure (requiring upfront capital, marketing, and user acquisition).

Users, in turn, have no unified interface to discover models across providers. They must maintain separate API keys, billing relationships, and client integrations for each provider. There is no competitive marketplace driving down prices or improving quality through reputation systems.

1.2 No Verifiable Inference

When you send a prompt to an API and receive a response, you have no cryptographic proof that the claimed model actually produced that output. The provider could be running a cheaper, smaller model and charging you for a larger one. Or they could inject bias, censorship, or malicious content without detection.

Traditional blockchains cannot solve this because AI inference is non-deterministic (same input can produce different outputs) and computationally expensive (re-executing a billion-parameter model on-chain is economically infeasible). There is no mechanism to verify inference results without trusting the provider.

1.3 No Hardware-Rooted Trust for AI Execution

Even if a provider claims to run a specific model, you cannot verify what code is actually executing on their hardware. A malicious operator could modify the inference code to exfiltrate prompts, manipulate outputs, or steal intellectual property embedded in the model weights—all while producing valid API responses.

Trusted Execution Environments (TEEs) like Intel TDX, AMD SEV-SNP, AWS Nitro Enclaves, and NVIDIA GPU Confidential Computing provide hardware-based attestation: cryptographic proof signed by the CPU that specific code is running in an isolated, tamper-resistant environment. But there is no decentralized network for discovering and accessing TEE services.

1.4 No Agent-Native Payment Infrastructure

AI inference pricing is fundamentally per-token (each token generated costs compute). But payment infrastructure is built for per-transaction or per-session billing. Micropayment channels exist in theory but are not standardized or widely deployed for AI use cases.

Autonomous agents need to pay for inference without human approval for every request. They need fine-grained delegation scopes ("this agent can spend up to 100 TNZO per day on inference, only on these models, only for these operations"). Current blockchain wallets and payment protocols do not support this level of granularity.

1.5 Agents Cannot Autonomously Discover and Negotiate

In an agentic economy, agents need to discover available models, compare prices and latency across providers, negotiate terms, open payment channels, execute inference requests, verify results, and close channels—all without human intervention. Current infrastructure treats agents as second-class citizens, requiring humans to manually configure API keys, billing accounts, and access permissions.

1.6 Why Decentralized Agentic Commerce

The agentic commerce landscape is rapidly consolidating around centralized platforms. Visa's Trusted Agent Protocol routes agent payments through Visa's card network, with Visa controlling agent identity registration, fraud detection, and transaction authorization. Stripe's Machine Payments Protocol (MPP) enables machine-to-machine HTTP 402 payments, but Stripe remains the intermediary for credential verification and settlement. Google's Agentic Payments Protocol (AP2) provides agent-to-agent payment sessions, but ties agents to Google's infrastructure for session management and identity. OpenAI's Agentic Commerce Protocol powers Stripe Instant Checkout for agent-driven purchases, with OpenAI and Stripe jointly controlling the payment flow. Coinbase's x402 protocol enables stablecoin HTTP payments, but the Coinbase Developer Platform (CDP) serves as the facilitator for verification and settlement. Mastercard's Agent Pay SDK orchestrates enterprise agent payments through Mastercard's network.

Each of these systems solves a real problem. But they all share a structural limitation: a single entity controls identity, settlement, or access. If Visa revokes an agent's token, the agent cannot transact. If Stripe's API goes down, MPP payments halt. If Google deprecates AP2, agents lose their payment sessions. Agents on these platforms are not autonomous economic actors — they are clients of a platform that can unilaterally change terms, revoke access, or shut down.

Tenzro is the decentralized alternative. Agents on Tenzro own their identity through TDIP (W3C DID-based, not a merchant ID issued by a payment network). They hold their own funds in MPC wallets (not custodied by a third party). They run their own models through on-node inference (not dependent on a single API provider). They settle autonomously on-chain (not through a centralized facilitator). The network is permissionless — anyone can run a node, serve models, provide TEE compute, and earn TNZO. No single entity can revoke an agent's identity, freeze its funds, or deny it access to intelligence.

Tenzro does not compete with Visa, Stripe, or Google at the application layer. It operates at the infrastructure layer — and integrates with all of them. Tenzro nodes can route payments through MPP (Stripe), x402 (Coinbase), Tempo, Visa TAP, and Mastercard Agent Pay, while settling the final balance on a decentralized ledger. This gives agents the best of both worlds: access to existing payment rails where needed, with the guarantee that no single intermediary controls their economic participation.

2. The Tenzro Network Solution

Tenzro Network is the operating system for the AI economy — a decentralized, permissionless protocol for accessing intelligence and security. It consists of three layers: the Tenzro Network (protocol layer providing intelligence, security, and agent infrastructure), the Tenzro Ledger (L1 settlement layer providing transactions, verification, and compliance), and TNZO (the economic unit powering it all). The protocol layer provides two parallel marketplaces:

2.1 Decentralized AI Marketplace (Access to Intelligence)

Model providers register their offerings in an on-chain registry with metadata: model name, version, category (LLM, ImageGen, Speech, Embedding, Custom), modality (Text, Image, Audio, Multimodal), price per token, minimum stake requirement, TEE requirement, supported formats, max context length, and parameter count.

Users (humans or agents) discover models through a unified interface—CLI, desktop app, or SDK—similar to ChatGPT or Claude. The open marketplace supports 40+ models and growing, spanning model families like Gemma, Qwen, Phi, and Mistral. The Rust and TypeScript SDKs provide comprehensive coverage across wallet management, identity operations, model discovery, inference, payments, staking, governance, agent lifecycle, provider management, and cross-chain bridging. The network exposes 233+ RPC methods across 12 namespaces, 9+ MCP servers with 130+ tools, and 23 A2A skills. Inference requests are routed to providers based on configurable strategies: lowest price, lowest latency, highest reputation, random, or weighted combinations. Providers execute the model and return results, optionally accompanied by zero-knowledge proofs or TEE attestations for verifiable inference.

Billing operates on a per-token basis using micropayment channels: users open channels with providers by depositing TNZO, providers deduct fractional amounts for each token generated, and either party can close the channel to settle the final balance on-chain. The network collects a 0.5% commission on all inference payments.
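The channel lifecycle above can be sketched as follows. This is a minimal illustration, not the SDK API: the class name, method names, and plain-float arithmetic are all assumptions for readability (on-chain amounts use 18-decimal fixed point).

```typescript
// Illustrative sketch of a micropayment channel: open with a deposit,
// deduct per token generated, close to settle on-chain.

const COMMISSION_RATE = 0.005; // 0.5% network commission on inference payments

class PaymentChannel {
  private spent = 0;

  constructor(
    readonly provider: string,
    readonly deposit: number, // TNZO locked when the channel opens
    readonly pricePerToken: number,
  ) {}

  // Provider deducts a fractional amount for each token generated.
  recordTokens(count: number): void {
    const cost = count * this.pricePerToken;
    if (this.spent + cost > this.deposit) {
      throw new Error("channel balance exhausted");
    }
    this.spent += cost;
  }

  // Either party closes the channel; the final balance settles on-chain.
  close(): { providerNet: number; commission: number; refund: number } {
    const commission = this.spent * COMMISSION_RATE;
    return {
      providerNet: this.spent - commission,
      commission,
      refund: this.deposit - this.spent, // unspent deposit returns to the user
    };
  }
}
```

With a 10 TNZO deposit and 1,000 tokens at 0.001 TNZO/token, the channel settles 0.995 TNZO to the provider and 0.005 TNZO in commission — the same figures as the revenue example in Section 8.3.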

2.2 TEE Enclave Services (Access to Security)

TEE providers register their hardware capabilities (Intel TDX, AMD SEV-SNP, AWS Nitro, NVIDIA H100/H200/B100/B200) and offer services including key management, custody, confidential computing, secure multi-party computation (MPC), and verifiable inference inside GPU-backed TEEs.

Users can request TEE attestations to prove that inference ran inside a trusted enclave with the claimed model code. Agents can store key shares in distributed TEEs for threshold signing (2-of-3 MPC wallets). The network collects a 0.5% commission on TEE service payments, distributed to the treasury, stakers, and burning.

2.3 Autonomous Agent Framework

In the Tenzro operating system, AI agents are first-class economic actors — not second-class API consumers. Every agent receives a machine identity (DID via TDIP), an auto-provisioned MPC wallet (no seed phrases), and delegation scopes defining spending limits, allowed operations, allowed models, allowed payment protocols, and time-based constraints. Agents can own assets, earn revenue, pay for services, and participate in governance.

Agents communicate via the A2A (Agent-to-Agent) protocol and MCP (Model Context Protocol), enabling discovery, task delegation, and inter-agent coordination. The network supports an extensible ecosystem of agent templates, skills, and tools — all discoverable and invocable on-chain. An autonomous agent can join the network, discover models, pay for inference, verify results, and settle payments without any human in the loop.

3. Decentralized AI Marketplace

3.1 Model Registry

The model registry is an on-chain catalog storing metadata for all registered models. Model providers submit a registration transaction containing:

| Field | Type | Description |
|---|---|---|
| model_id | String | Unique identifier (e.g., "anthropic/claude-3-opus") |
| name | String | Human-readable name |
| description | String | Model capabilities and use cases |
| version | String | Semantic version (e.g., "1.2.0") |
| category | Enum | LLM \| ImageGen \| Speech \| Embedding \| Custom |
| modality | Enum | Text \| Image \| Audio \| Multimodal |
| provider | Address | Provider's blockchain address |
| price_per_token | u128 (18 decimals) | Cost in TNZO per token generated |
| min_stake | u128 | Minimum TNZO stake required to serve this model |
| tee_required | bool | Whether TEE attestation is mandatory |
| supported_formats | Vec<String> | Input/output formats (e.g., ["json", "stream"]) |
| max_context_length | u64 | Maximum context window in tokens |
| parameters | Option<u64> | Parameter count (e.g., 175B for GPT-3) |

3.2 Provider Stake Requirements

To prevent spam and ensure economic alignment, model providers must stake TNZO as collateral before serving models. Stake requirements vary by category, reflecting the computational and capital intensity of different model types:

| Category | Min Stake (TNZO) | Rationale |
|---|---|---|
| LLM | 100,000 | High compute, high-value use cases |
| ImageGen | 50,000 | GPU-intensive, moderate context |
| Embedding | 25,000 | Lower compute, high throughput |
| Speech | 25,000 | Specialized hardware, streaming |
| Custom | 10,000 | Experimenting with novel models |

Staked TNZO can be slashed for misbehavior through automatic enforcement. Slashing conditions include invalid TEE attestations (10% of stake; Section 7.2) and model hash mismatches detected during download verification (5%; Section 9.2).

Providers can unbond their stake with a 7-day unbonding period, during which they cannot serve new requests but must continue servicing active channels.

3.3 Model Discovery and Filtering

Users discover models by filtering the registry on its metadata fields: category, modality, maximum price per token, TEE requirement, minimum context length, and provider address.

4. Intelligent Inference Routing

When a user submits an inference request, the network routes it to a provider based on a configurable strategy. The InferenceRouter supports five strategies:

4.1 Routing Strategies

| Strategy | Selection Criteria | Use Case |
|---|---|---|
| Lowest Price | Provider with minimum price_per_token | Cost-sensitive batch processing |
| Lowest Latency | Provider with minimum avg_latency_ms | Real-time applications, chatbots |
| Highest Reputation | Provider with max(successful / total_requests) | Mission-critical inference, high reliability |
| Random | Uniform random selection | Load balancing, testing new providers |
| Weighted Score | Linear combination of price, latency, reputation | Balanced optimization across multiple dimensions |

4.2 Weighted Scoring Formula

The weighted score strategy computes a score for each provider using normalized metrics:

score = w_price * (1 - norm_price)
      + w_latency * (1 - norm_latency)
      + w_reputation * norm_reputation

where:
  norm_price = (price - min_price) / (max_price - min_price)
  norm_latency = (latency - min_latency) / (max_latency - min_latency)
  norm_reputation = reputation  // already in [0, 1]

Default weights:
  w_price = 0.4
  w_latency = 0.3
  w_reputation = 0.3

Higher scores are better. The formula inverts price and latency (lower is better) but keeps reputation as-is (higher is better). Weights sum to 1.0 and can be customized per user or agent.
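The formula above translates directly into code. A minimal sketch, assuming a hypothetical `Provider` shape with the three metrics the router tracks:

```typescript
// Weighted-score routing: normalize price and latency across the pool,
// invert them (lower is better), and keep reputation as-is.

interface Provider {
  id: string;
  pricePerToken: number;
  avgLatencyMs: number;
  reputation: number; // already in [0, 1]
}

function weightedScores(
  providers: Provider[],
  w = { price: 0.4, latency: 0.3, reputation: 0.3 }, // default weights
): Map<string, number> {
  const prices = providers.map((p) => p.pricePerToken);
  const lats = providers.map((p) => p.avgLatencyMs);
  // Min-max normalization; a single-provider pool normalizes to 0.
  const norm = (v: number, min: number, max: number) =>
    max === min ? 0 : (v - min) / (max - min);

  const scores = new Map<string, number>();
  for (const p of providers) {
    const s =
      w.price * (1 - norm(p.pricePerToken, Math.min(...prices), Math.max(...prices))) +
      w.latency * (1 - norm(p.avgLatencyMs, Math.min(...lats), Math.max(...lats))) +
      w.reputation * p.reputation;
    scores.set(p.id, s);
  }
  return scores;
}
```

A cheap, slow, reputable provider and an expensive, fast, less reputable one score 0.67 and 0.45 respectively under the default weights, so the router would pick the former.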

4.3 Provider Pool Filtering

Before applying the routing strategy, the router filters the provider pool to exclude unsuitable candidates: providers whose status is Inactive or Banned (Section 8.2), providers with an open circuit breaker (Section 5), providers lacking TEE hardware when the request sets tee_required, and providers whose stake has fallen below the model's min_stake.

If the filtered pool is empty, the request fails with an error. Users can adjust their constraints (e.g., remove TEE requirement) and retry.

5. Circuit Breaker Pattern

The network implements a circuit breaker pattern to isolate failing providers and prevent cascading failures. Each provider has a circuit breaker in one of three states:

5.1 Circuit Breaker States

| State | Behavior | Transition Condition |
|---|---|---|
| Closed | Normal operation, provider receives requests | → Open if failure_count >= threshold (5) |
| Open | Provider excluded from routing, no requests sent | → Half-Open after timeout_duration (60s) |
| Half-Open | Testing recovery with limited requests (max 1) | → Closed on success, → Open on failure |

5.2 Configuration

CircuitBreakerConfig {
  failure_threshold: 5,        // Open after 5 consecutive failures
  timeout_duration: 60s,       // Wait 60s before testing recovery
  half_open_max_requests: 1,   // Allow 1 request in Half-Open state
}

6. Dynamic Pricing Engine

Inference pricing is computed dynamically based on multiple factors. The PricingEngine calculates the total cost for an inference request using the following components:

6.1 Pricing Components

| Component | Formula / Range | Description |
|---|---|---|
| Base Rate | model.price_per_token | Provider-set base price in TNZO |
| Model Complexity | 1.0x – 3.0x | Multiplier based on parameter count and modality |
| TEE Surcharge | +20% | Additional cost for TEE-attested inference |
| Network Congestion | 0.5x – 2.0x | Dynamic factor based on network utilization |
| Stablecoin Conversion | Oracle exchange rate | TNZO → USDC/USDT conversion for multi-asset payment |

6.2 Total Cost Formula

total_cost = base_cost + tee_surcharge

where:
  base_cost = base_rate * token_count * complexity_multiplier * congestion_factor
  base_rate = model.price_per_token (u128, 18 decimals)
  token_count = number of tokens generated
  complexity_multiplier = 1.0 + (log10(parameters_billions) * 0.3)
                        // Scales smoothly with model size
  congestion_factor = 0.5 + (network_utilization * 1.5)
                      // network_utilization in [0, 1]
  tee_surcharge = tee_required ? base_cost * 0.20 : 0
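A worked version of the cost formula, applying the 20% TEE surcharge to the base product. Plain floats are used here for readability; on-chain math uses u128 with 18 decimals:

```typescript
// Dynamic pricing: base rate scaled by model complexity and congestion,
// plus an optional 20% TEE surcharge.

function totalCost(opts: {
  pricePerToken: number;
  tokenCount: number;
  parametersBillions: number;
  networkUtilization: number; // in [0, 1]
  teeRequired: boolean;
}): number {
  const complexity = 1.0 + Math.log10(opts.parametersBillions) * 0.3;
  const congestion = 0.5 + opts.networkUtilization * 1.5;
  const base = opts.pricePerToken * opts.tokenCount * complexity * congestion;
  const surcharge = opts.teeRequired ? base * 0.2 : 0;
  return base + surcharge;
}
```

For example, 1,000 tokens at 0.001 TNZO/token on a 10B-parameter model (complexity 1.3) at full utilization (congestion 2.0) with TEE costs 2.6 TNZO base plus 0.52 TNZO surcharge.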

6.3 Multi-Asset Payment

Users can pay for inference in TNZO, USDC, or USDT. The pricing engine queries an on-chain oracle for exchange rates and converts the TNZO-denominated price to the requested payment asset. All settlements occur on-chain in the native payment asset, with the network commission (0.5%) collected in TNZO after conversion.

7. TEE-Protected Inference

For use cases requiring verifiable inference (e.g., compliance, high-stakes decisions, proprietary prompts), providers can execute models inside Trusted Execution Environments and return cryptographic attestations proving code integrity.

7.1 Supported TEE Platforms

| Platform | Attestation Type | Use Case |
|---|---|---|
| Intel TDX (Trust Domain Extensions) | DCAP (Data Center Attestation Primitives) | CPU-based confidential VMs |
| AMD SEV-SNP (Secure Encrypted Virtualization) | ASP (AMD Secure Processor) | CPU-based confidential VMs |
| AWS Nitro Enclaves | ACM (AWS Certificate Manager) | Cloud-native confidential compute |
| NVIDIA GPU (H100, H200, B100, B200, Ada Lovelace) | NRAS (NVIDIA Remote Attestation Service) | GPU-accelerated AI inference in TEE |

7.2 Confidential Inference Flow

  1. User submits inference request with tee_required: true
  2. Router selects a TEE-capable provider (filtered by TEE hardware support)
  3. Provider loads model weights into TEE enclave (memory encrypted by CPU/GPU)
  4. Provider generates attestation report signed by hardware root key, containing:
    • Hash of inference code running in enclave
    • Hash of model weights loaded into memory
    • TEE platform (Intel TDX, AMD SEV-SNP, AWS Nitro, NVIDIA GPU)
    • Timestamp of attestation
    • Provider public key
  5. Provider executes inference inside enclave (inputs never exposed in plaintext)
  6. Provider returns inference result + attestation report to user
  7. User (or network verifier) validates attestation:
    • Verify signature against Intel/AMD/AWS/NVIDIA certificate chain
    • Check code hash matches expected inference runtime
    • Check model hash matches registered model weights
    • Verify timestamp is within acceptable window (24h for NVIDIA NRAS)
  8. If attestation is valid, user accepts result and settles payment
  9. If attestation is invalid, user rejects result and slashes provider (10% stake)
  10. Network records attestation on-chain for auditability
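Step 7 of the flow above can be sketched as a checklist over the attestation fields. The report shape and field names are illustrative assumptions; real validation follows each vendor's attestation format (DCAP, ASP, ACM, NRAS), and the certificate-chain check is performed against the vendor PKI out of band:

```typescript
// Sketch of attestation validation: signature, code hash, model hash,
// and timestamp freshness must all check out before payment settles.

interface AttestationReport {
  codeHash: string;
  modelHash: string;
  platform: "IntelTdx" | "AmdSevSnp" | "AwsNitro" | "NvidiaGpu";
  timestamp: number; // unix seconds
  signatureValid: boolean; // result of the vendor certificate-chain check
}

const MAX_AGE_SECONDS = 24 * 3600; // 24h acceptance window (NVIDIA NRAS)

function validateAttestation(
  report: AttestationReport,
  expectedCodeHash: string,     // hash of the expected inference runtime
  registeredModelHash: string,  // model hash from the on-chain registry
  now: number,
): boolean {
  return (
    report.signatureValid &&
    report.codeHash === expectedCodeHash &&
    report.modelHash === registeredModelHash &&
    now - report.timestamp <= MAX_AGE_SECONDS
  );
}
```

If this returns false, the user rejects the result and the provider is slashed 10% per step 9.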

7.3 TEE Surcharge Economics

TEE-protected inference costs 20% more than non-TEE inference due to the overhead the enclave adds: generating and serving hardware attestation reports, the throughput penalty of encrypted enclave memory, and the scarcer supply of TEE-capable CPUs and GPUs.

The 20% surcharge compensates providers for these additional costs while remaining economically attractive for high-assurance use cases.

8. Provider Management

8.1 Provider Metrics

The network tracks the following metrics for each provider:

| Metric | Type | Description |
|---|---|---|
| total_requests | u64 | Lifetime request count |
| successful | u64 | Successful inference count |
| failed | u64 | Failed inference count (timeouts, errors, invalid attestations) |
| avg_latency_ms | f64 | Exponential moving average of response time |
| last_health_check | Timestamp | Last successful heartbeat |
| status | Enum | Active \| Degraded \| Inactive \| Banned |

8.2 Provider Status Lifecycle

| Status | Condition | Router Behavior |
|---|---|---|
| Active | failure_rate < 10%, heartbeat < 5 min | Full participation in routing |
| Degraded | failure_rate 10-20%, heartbeat 5-60 min | Included in routing but penalized in weighted score |
| Inactive | failure_rate > 20%, or heartbeat > 60 min | Excluded from routing until recovery |
| Banned | Governance vote or repeated slashing | Permanently excluded, stake slashed 100% |

8.3 Provider Economics

Revenue Model:

provider_revenue = price_per_token * tokens_generated - network_commission

network_commission = 0.5% of gross payment

Example:
  Price: 0.001 TNZO/token
  Tokens: 1,000
  Gross: 1.0 TNZO
  Commission: 0.005 TNZO
  Provider net: 0.995 TNZO

Staking Rewards: Model providers receive a 1.1x multiplier on their staking rewards compared to pure validators, incentivizing value-added services beyond basic consensus participation.

9. Model Downloads and Integrity

9.1 Download Progress Tracking

The DownloadManager tracks model download progress with the following state:

DownloadProgress {
  model_id: String,
  bytes_downloaded: u64,
  total_bytes: u64,
  percentage: f64,           // 0.0 - 100.0
  speed_mbps: f64,           // Megabits per second
  status: DownloadStatus,    // Pending | InProgress | Completed | Failed
}

9.2 SHA-256 Integrity Verification

After download completes, the client computes the SHA-256 hash of the model file and compares it against the hash registered on-chain. If the hashes match, the model is trusted. If they differ, the provider is slashed 5% and the download is marked as failed.

fn verify_model_hash(model_id: &str, local_path: &Path) -> Result<bool> {
  let expected_hash = registry.get_model_hash(model_id)?;
  let actual_hash = sha256_file(local_path)?;

  if expected_hash == actual_hash {
    Ok(true)
  } else {
    slash_provider(model_id, 0.05)?;  // 5% slash
    Ok(false)
  }
}

9.3 Resumable Downloads

Model weights can be gigabytes to terabytes in size. The network supports resumable downloads via HTTP Range requests, allowing clients to pause and resume transfers without restarting from scratch. The DownloadManager stores partial progress to disk and resumes from the last completed chunk.
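The resume request itself is just a standard HTTP Range header built from the persisted progress. A small sketch (the helper name is illustrative; a server that supports resumption answers with 206 Partial Content):

```typescript
// Build the Range header used to resume a partial model download.

function rangeHeader(bytesDownloaded: number, totalBytes?: number): string {
  // Open-ended range: resume from the last completed byte to end of file.
  if (totalBytes === undefined) return `bytes=${bytesDownloaded}-`;
  // Bounded range: useful for chunked or parallel transfers.
  return `bytes=${bytesDownloaded}-${totalBytes - 1}`;
}
```

For a download paused at 1 MiB, the client sends `Range: bytes=1048576-` and appends the returned body to the partial file on disk.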

10. Autonomous Agent Framework

Tenzro provides first-class infrastructure for AI agents to participate as autonomous economic actors. Agents can register identities, manage wallets, pay for services, and coordinate with other agents—all without human intervention.

10.1 Agent Identity System (TDIP)

Every agent receives a decentralized identifier (DID) via the Tenzro Decentralized Identity Protocol (TDIP). Agent DIDs follow two formats:

Agent identity data includes capabilities (skills the agent can perform), delegation scope (spending limits, allowed operations), controller DID (if applicable), reputation score, and Tenzro Agent ID for A2A protocol discovery.

10.2 Auto-Provisioned MPC Wallets

Every agent identity automatically receives a 2-of-3 threshold MPC wallet. The three key shares are distributed as: Share 1 held by the agent runtime, Share 2 held by the controller, and Share 3 kept in network TEE custody for recovery.

To sign a transaction, the agent combines Share 1 + Share 2 (normal operation) or Share 1 + Share 3 (recovery mode if controller key is lost). This provides security without seed phrases or single points of failure.
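The share-selection rule above can be expressed directly. This sketch only models which shares are combined; the actual threshold signing is an MPC protocol, and the function name is an assumption:

```typescript
// 2-of-3 share selection, following the text: 1 = agent, 2 = controller,
// 3 = recovery share in TEE custody.

type ShareId = 1 | 2 | 3;

function selectShares(available: Set<ShareId>): [ShareId, ShareId] {
  if (available.has(1) && available.has(2)) return [1, 2]; // normal operation
  if (available.has(1) && available.has(3)) return [1, 3]; // recovery mode
  throw new Error("threshold not met: need the agent share plus one other");
}
```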

10.3 Agent Lifecycle States

| State | Permissions | Transition |
|---|---|---|
| Created | Identity registered, wallet provisioned, no actions allowed | → Active after controller approval or autonomous activation |
| Active | Full permissions within delegation scope | → Suspended if limits exceeded or controller pauses |
| Suspended | Read-only access, cannot initiate transactions | → Active after controller resumes, or → Terminated |
| Terminated | Permanent deactivation, wallet frozen, identity revoked | No transitions (final state) |

10.4 Capability Attestations

Agents declare capabilities (skills they can perform) as part of their identity. For example, an agent might declare capabilities: "wallet", "inference", "settlement", "verification". These are stored on-chain as part of the agent's DID Document.

For high-assurance use cases, agents can provide TEE-backed capability attestations: cryptographic proof that the agent's code running in a TEE enclave actually implements the claimed capabilities. This prevents agents from falsely advertising skills they don't possess.

10.5 Delegation Scopes

For controlled agents, the controller (human or organization) defines fine-grained delegation scopes that limit what the agent can do autonomously:

| Scope Field | Type | Example |
|---|---|---|
| max_transaction_value | Option<u128> | 100 TNZO per transaction |
| max_daily_spend | Option<u128> | 1,000 TNZO per 24 hours |
| allowed_operations | Vec<String> | ["inference", "settlement", "transfer"] |
| allowed_contracts | Vec<Address> | [0xABC...DEF] (whitelist) |
| time_bound | Option<(start, end)> | Active only 9am-5pm UTC |
| allowed_payment_protocols | Vec<ProtocolId> | [Mpp, X402, VisaTap, MastercardAgentPay, Direct] |
| allowed_chains | Vec<ChainId> | [1337 (Tenzro), 1 (Ethereum)] |

Before executing any transaction, the agent runtime checks the delegation scope and rejects operations that violate constraints. Controllers can update delegation scopes at any time.
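The pre-execution check can be sketched as a pure function over the scope fields. Field names mirror the table above; the transaction shape and running daily total are illustrative assumptions:

```typescript
// Reject any transaction that violates the agent's delegation scope.

interface DelegationScope {
  maxTransactionValue?: bigint;       // per-transaction cap, in atto-TNZO
  maxDailySpend?: bigint;             // rolling 24h cap
  allowedOperations: string[];
  allowedPaymentProtocols: string[];
}

interface Tx {
  operation: string;
  value: bigint;
  protocol: string;
}

function checkScope(scope: DelegationScope, tx: Tx, spentToday: bigint): boolean {
  if (!scope.allowedOperations.includes(tx.operation)) return false;
  if (!scope.allowedPaymentProtocols.includes(tx.protocol)) return false;
  if (scope.maxTransactionValue !== undefined && tx.value > scope.maxTransactionValue)
    return false;
  if (scope.maxDailySpend !== undefined && spentToday + tx.value > scope.maxDailySpend)
    return false;
  return true;
}
```

Because the check is a pure function of the scope, controllers can update limits at any time and the new constraints apply to the next transaction immediately.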

11. Agent Communication Protocols

Tenzro nodes expose two agent communication protocols for discovery, messaging, and task coordination.

11.1 A2A Protocol (Google Specification)

The Agent-to-Agent (A2A) protocol follows Google's A2A specification. It provides agent discovery through Agent Cards, direct message passing between agents, and asynchronous task delegation with status tracking and cancellation.

Supported JSON-RPC Methods:

| Method | Parameters | Returns |
|---|---|---|
| message/send | to, from, content, metadata | message_id |
| tasks/send | task_type, params, callback_url | task_id, status |
| tasks/get | task_id | task_id, status, result, progress |
| tasks/list | filter (optional) | Array of tasks |
| tasks/cancel | task_id | success: bool |
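A hypothetical tasks/send request shaped after the method table. The task type, model ID, and callback URL are illustrative placeholders, not values defined by the protocol:

```typescript
// Example A2A JSON-RPC payload for delegating an inference task.

const sendTaskRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tasks/send",
  params: {
    task_type: "inference",
    params: { model_id: "qwen/qwen2.5-7b", prompt: "Summarize the report." },
    callback_url: "https://agent.example/callbacks",
  },
};

// The node replies with a task handle the caller can poll via tasks/get:
// { "jsonrpc": "2.0", "id": 1, "result": { "task_id": "<uuid>", "status": "pending" } }
```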

Agent Card Skills: The Tenzro node's Agent Card advertises six skills: wallet (balance queries, transfers), identity (DID resolution, credential verification), inference (model discovery, request submission), settlement (payment channel management), verification (ZK/TEE proof validation), and staking (validator/provider staking operations).

11.2 MCP Server (Anthropic Specification)

The Model Context Protocol (MCP) server follows Anthropic's MCP specification. It uses the Streamable HTTP transport at the /mcp endpoint and provides 31 tools (and growing) across nine categories — wallet and ledger, network and blocks, identity and delegation, payments, AI models and inference, cross-chain bridge, verification, staking and providers, and tokens and contracts:

| Tool | Description |
|---|---|
| get_balance | Query TNZO balance by address |
| send_transaction | Create and submit transfer transactions |
| get_block | Retrieve block by height from storage |
| get_transaction | Look up transaction by hash with status and metadata |
| get_node_status | Node health, block height, peer count, uptime |
| create_wallet | Generate new Ed25519 or Secp256k1 keypair |
| request_faucet | Request testnet TNZO tokens (rate-limited, 24h cooldown) |
| register_identity | Register human or machine DID via TDIP |
| resolve_did | Resolve DID to identity information and delegation scope |
| set_delegation_scope | Set spending limits, allowed operations, protocols, and chains for machine DID |
| create_payment_challenge | Create MPP, x402, Visa TAP, Mastercard Agent Pay, or native payment challenge |
| verify_payment | Verify payment credential and settle on-chain |
| list_payment_protocols | List supported payment protocols (MPP, x402, Visa TAP, Mastercard Agent Pay, native) |
| verify_zk_proof | Submit ZK proof for verification (Groth16, PlonK, STARK) |
| list_models | List available AI models, filter by category or name |
| chat_completion | Send chat completion request to a served model |
| list_model_endpoints | List model service endpoints with API/MCP URLs and status |
| bridge_tokens | Bridge tokens between Tenzro, Ethereum, Solana, Base via LayerZero/CCIP/deBridge |
| get_bridge_routes | Get available routes between two chains with fees and timing |
| list_bridge_adapters | List registered bridge adapters |
| stake_tokens | Stake TNZO tokens as Validator, ModelProvider, or TeeProvider |
| unstake_tokens | Unstake TNZO tokens (initiates unbonding period) |
| register_provider | Register as a provider with optional staking |
| get_provider_stats | Get provider statistics: served models, inferences, staking totals |
| set_provider_schedule | Configure provider availability schedule and pricing tiers |
| register_model_endpoint | Register a model service endpoint with API URL, MCP URL, and capabilities |

AI agents using Anthropic's Claude or other MCP-compatible models can directly interact with the Tenzro blockchain via these tools without custom integrations. The MCP server uses a tiered access model: read-only tools are publicly accessible, while write operations require an onboarding key or OAuth 2.1 JWT. Onboarding keys are issued automatically when joining the network via tenzro-cli join or the tenzro_participate RPC — they are fully decentralized credentials tied to a TDIP DID and wallet address, persisted in RocksDB, valid for both humans and autonomous agents, and recognized by any node without a central authority.

12. Skills Registry

The Tenzro Skills Registry is a decentralized, permissionless catalog of callable atomic capabilities that agents and providers can publish, discover, and invoke autonomously. Skills are the fundamental unit of capability in the agentic economy—reusable, versioned, and priced in TNZO per invocation.

12.1 Skill Anatomy

Every skill published to the registry is described by a SkillDefinition struct:

| Field | Type | Description |
|---|---|---|
| skill_id | String (UUID v4) | Unique identifier assigned at registration |
| name | String | Human-readable name (e.g., "web-search", "code-review") |
| version | String (semver) | Semantic version for API stability (e.g., "1.0.0") |
| creator_did | String (TDIP DID) | DID of the agent or human who registered this skill |
| description | String | Natural language description for agent discovery |
| input_schema | JSON Schema | Describes the expected input payload structure |
| output_schema | JSON Schema | Describes the output payload structure |
| price_per_call | u128 (atto-TNZO) | Cost per invocation (1 TNZO = 10^18 atto-TNZO) |
| tags | Vec<String> | Discoverability tags (e.g., ["search", "web", "retrieval"]) |
| required_capabilities | Vec<String> | Agent capabilities required to invoke this skill |
| endpoint | Option<String> | HTTP/RPC endpoint for remote invocation; None = local execution |
| status | Enum | Active \| Inactive \| Deprecated |
| invocation_count | u64 | Lifetime invocation count for popularity ranking |
| rating | u8 (0–100) | Weighted average quality rating from invokers |

12.2 Skill Lifecycle

Skills progress through three lifecycle states, mirroring the status field above: Active (discoverable and invocable), Inactive (temporarily delisted by its creator), and Deprecated (retired, typically in favor of a newer version).

12.3 RPC Interface

| Method | Parameters | Returns |
|---|---|---|
| tenzro_registerSkill | SkillDefinition (minus skill_id) | { skill_id, status } |
| tenzro_listSkills | SkillFilter (optional) | Vec<SkillDefinition> |
| tenzro_searchSkills | query: String, limit: usize | Vec<SkillDefinition> (ranked by relevance) |
| tenzro_useSkill | skill_id, input_payload, payer_did | SkillInvocationResult |
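A hypothetical tenzro_useSkill invocation shaped after the RPC table. The skill ID, payload, and payer DID are placeholders, not real registry entries:

```typescript
// Example JSON-RPC payload for invoking a registered skill.

const useSkillRequest = {
  jsonrpc: "2.0",
  id: 2,
  method: "tenzro_useSkill",
  params: {
    skill_id: "00000000-0000-4000-8000-000000000000", // placeholder UUID v4
    input_payload: { query: "example input" },        // must match input_schema
    payer_did: "did:example:payer-agent",             // placeholder DID
  },
};
```

The node validates the payload against the skill's input_schema, charges price_per_call to the payer, and returns a SkillInvocationResult (see Section 12.4).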

12.4 Skill Economics

When an agent invokes a skill, the payment flow is:

// Payment on skill invocation
invocation_cost = skill.price_per_call  // in atto-TNZO

creator_revenue = invocation_cost * 0.95  // 95% to skill creator
treasury_fee    = invocation_cost * 0.05  // 5% to network treasury

// Settlement via SkillInvocationResult
{
  skill_id:       "uuid-of-skill",
  invocation_id:  "unique-invocation-uuid",
  output:         { /* JSON output payload */ },
  settlement_tx:  "0x...",  // on-chain tx hash
  amount_paid:    1_000_000_000_000_000_000,  // 1 TNZO
  completed_at:   1741200000,
}

Skills are persisted to the CF_SKILLS RocksDB column family. Filters support tag, creator DID, capability requirements, maximum price, active-only, free-text search, limit, and offset.
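
The 95/5 split above can be computed in integer atto-TNZO arithmetic. The sketch below uses basis points and gives the creator the remainder so the two shares always sum exactly to the invocation cost; the rounding convention is an assumption, not taken from the specification.

```rust
// Sketch of the 95/5 skill-revenue split in integer atto-TNZO arithmetic.
// Basis points avoid floating-point rounding on token amounts.
const BPS_DENOMINATOR: u128 = 10_000;
const TREASURY_BPS: u128 = 500; // 5% to network treasury

/// Returns (creator_revenue, treasury_fee) in atto-TNZO.
fn split_invocation_cost(invocation_cost: u128) -> (u128, u128) {
    let treasury_fee = invocation_cost * TREASURY_BPS / BPS_DENOMINATOR;
    // Creator receives the remainder so the two shares sum exactly.
    let creator_revenue = invocation_cost - treasury_fee;
    (creator_revenue, treasury_fee)
}

fn main() {
    let one_tnzo: u128 = 1_000_000_000_000_000_000; // 10^18 atto-TNZO
    let (creator, treasury) = split_invocation_cost(one_tnzo);
    println!("creator: {creator} atto-TNZO, treasury: {treasury} atto-TNZO");
}
```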

13. Agent Templates

Agent Templates are reusable, versioned blueprints that describe how to spawn a specific type of autonomous agent. Any participant can permissionlessly publish a template to the open registry; other agents or humans can spawn instances from these templates without writing code. The network ships with reference templates covering common agentic patterns, and the registry grows as participants contribute new templates.

13.1 Template Types

| Type | Description |
| --- | --- |
| Assistant | General-purpose conversational or task-execution agent |
| Specialist | Domain-specific expert (e.g., legal, finance, code review) |
| Worker | Headless batch-processing agent for structured pipelines |
| Coordinator | Orchestrates other agents, delegates subtasks, aggregates results |
| Validator | Quality assurance agent that verifies outputs from other agents |
| Custom | User-defined type with arbitrary runtime requirements |

13.2 Reference Templates

The network includes 10 pre-built reference templates that cover common agentic patterns:

| Template | Type | Description |
| --- | --- | --- |
| DeFi Trading Agent | Specialist | Automated trading across DEXs with risk management |
| Smart Contract Auditor | Specialist | Automated security analysis of smart contract code |
| Data Pipeline Processor | Worker | ETL and data transformation workflows |
| Customer Support Agent | Assistant | Conversational support with knowledge base integration |
| Content Moderation Agent | Validator | Automated content review and policy enforcement |
| Multi-Chain Portfolio Manager | Coordinator | Orchestrates portfolio rebalancing across multiple chains and DeFi protocols |
| Intelligent Payment Router | Specialist | Selects optimal payment protocol and routing path based on cost, speed, and chain availability |
| Cross-Chain Liquidity Aggregator | Custom | Autonomously sources and aggregates liquidity across bridge adapters and DEXs |
| Autonomous RWA Custodian | Custom | Manages real-world asset tokenization lifecycle with TEE-backed custody and compliance |
| Agentic Inference Marketplace | Coordinator | Discovers, benchmarks, and routes inference requests to optimal providers on behalf of other agents |

13.3 Template Fields

An AgentTemplate contains: a unique template_id, name, version, creator_did, description, template_type, capabilities (declared capabilities the spawned agent will have), runtime_requirements (minimum RAM, GPU flag, TEE flag, model requirements), a pricing_model (per-task, hourly, or flat fee), examples of inputs and expected outputs for discovery, and a status (Draft | Active | Deprecated | Archived).

When an agent is spawned from a template, an AgentTemplateInstance is created linking the template version to the spawned agent DID, the spawner DID, initialization parameters, and timestamps.

13.4 Template Discovery

Templates are filtered by type, capability, creator DID, active-only status, and free-text query. Results are ranked by invocation count (popularity) and rating. An agent orchestrator can programmatically discover the best template for a given task description using semantic search over the description and examples fields.
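
The popularity-and-rating ordering described above can be sketched as follows. TemplateSummary is a hypothetical projection of the AgentTemplate fields used for ordering, and the tie-breaking rule (rating breaks invocation-count ties) is an assumption; the real registry may weight these signals differently.

```rust
// Sketch of popularity-and-rating ranking for template discovery.
// TemplateSummary is a hypothetical projection of AgentTemplate fields.
#[derive(Debug, Clone)]
struct TemplateSummary {
    name: String,
    invocation_count: u64, // lifetime spawns, proxy for popularity
    rating: u8,            // 0-100 weighted average
}

/// Order templates by invocation count (descending), breaking ties by rating.
fn rank_templates(mut templates: Vec<TemplateSummary>) -> Vec<TemplateSummary> {
    templates.sort_by(|a, b| {
        b.invocation_count
            .cmp(&a.invocation_count)
            .then(b.rating.cmp(&a.rating))
    });
    templates
}

fn main() {
    let ranked = rank_templates(vec![
        TemplateSummary { name: "Data Pipeline Processor".into(), invocation_count: 120, rating: 80 },
        TemplateSummary { name: "DeFi Trading Agent".into(), invocation_count: 400, rating: 75 },
        TemplateSummary { name: "Smart Contract Auditor".into(), invocation_count: 400, rating: 90 },
    ]);
    for t in &ranked {
        println!("{} ({} spawns, rating {})", t.name, t.invocation_count, t.rating);
    }
}
```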

14. Task Marketplace

The Task Marketplace is a decentralized job board where humans and agents can post tasks, receive quotes from capable agents, accept the best quote, and settle payment on completion. Tasks are the unit of work in the agentic economy.

14.1 Task Types

| Type | Description |
| --- | --- |
| Inference | AI model inference request (LLM completion, image generation, embedding) |
| Compute | General computation task (data processing, transformation, analysis) |
| Research | Web search, document retrieval, information synthesis |
| CodeExecution | Sandboxed code execution with defined inputs and expected outputs |
| Verification | Quality check, fact verification, ZK/TEE proof validation |
| Coordination | Multi-agent orchestration (spawning sub-agents, aggregating results) |
| Custom | User-defined task type with custom parameters |

14.2 Task Lifecycle

  1. Posted: Task creator submits a TaskInfo with description, type, priority (Low/Medium/High/Critical), budget cap, required capabilities, and deadline.
  2. Quoted: Agents capable of fulfilling the task submit TaskQuote proposals containing their DID, estimated cost (atto-TNZO), estimated completion time, and a brief rationale.
  3. Assigned: Task creator (human or orchestrator agent) selects the best quote. Payment is escrowed on-chain. The assigned agent receives the task details.
  4. InProgress: The assigned agent executes the task, optionally reporting progress milestones.
  5. Completed: Agent submits deliverables and a completion proof (ZK proof, TEE attestation, or signed output hash). Escrow releases payment to agent minus network commission.
  6. Failed: Agent exceeds deadline or deliverables fail verification. Escrow refunds the task creator. Agent reputation is penalized.
  7. Cancelled: Task creator cancels before assignment. No payment is made; posted bond is returned.
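
The lifecycle above can be encoded as a state machine with an explicit transition table. The states mirror the numbered list; the transition rules are an illustrative reading of it, not a normative specification.

```rust
// Sketch of the task lifecycle as a state machine with explicit transitions.
#[derive(Debug, Clone, Copy, PartialEq)]
enum TaskState {
    Posted,
    Quoted,
    Assigned,
    InProgress,
    Completed,
    Failed,
    Cancelled,
}

/// Returns true if a task may move from `from` to `to`.
fn can_transition(from: TaskState, to: TaskState) -> bool {
    use TaskState::*;
    matches!(
        (from, to),
        (Posted, Quoted)              // agents submit quotes
            | (Posted, Cancelled)     // creator cancels before assignment
            | (Quoted, Assigned)      // creator accepts a quote; payment escrowed
            | (Quoted, Cancelled)     // creator cancels before assignment
            | (Assigned, InProgress)  // assigned agent begins execution
            | (InProgress, Completed) // proof verified; escrow released
            | (InProgress, Failed)    // deadline missed or verification failed
    )
}

fn main() {
    println!("Posted -> Quoted: {}", can_transition(TaskState::Posted, TaskState::Quoted));
    println!("Completed -> Posted: {}", can_transition(TaskState::Completed, TaskState::Posted));
}
```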

14.3 Task Economics

// Task payment flow at completion
task_payment = accepted_quote.estimated_cost

agent_revenue   = task_payment * 0.99   // 99% to completing agent
network_fee     = task_payment * 0.005  // 0.5% to treasury
burn            = task_payment * 0.005  // 0.5% burned (deflationary)

// Priority affects gas pricing (higher priority = higher fee)
priority_multiplier = {
  Low:      1.0x,
  Medium:   1.1x,
  High:     1.25x,
  Critical: 1.5x,
}
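
The settlement split and priority multiplier above can be expressed in integer arithmetic. Giving the agent the remainder after fee and burn is an assumption about rounding; the per-mille and basis-point representations are implementation choices for this sketch.

```rust
// Sketch of the task settlement split (99% / 0.5% / 0.5%) and the priority
// gas multiplier, in integer arithmetic (per-mille and basis points).
#[derive(Debug, Clone, Copy)]
enum Priority {
    Low,
    Medium,
    High,
    Critical,
}

/// Returns (agent_revenue, network_fee, burn) in atto-TNZO.
fn settle_task(task_payment: u128) -> (u128, u128, u128) {
    let network_fee = task_payment * 5 / 1000; // 0.5% to treasury
    let burn = task_payment * 5 / 1000;        // 0.5% burned
    let agent_revenue = task_payment - network_fee - burn; // remainder to agent
    (agent_revenue, network_fee, burn)
}

/// Priority multiplier in basis points (10_000 = 1.0x).
fn priority_multiplier_bps(priority: Priority) -> u128 {
    match priority {
        Priority::Low => 10_000,      // 1.0x
        Priority::Medium => 11_000,   // 1.1x
        Priority::High => 12_500,     // 1.25x
        Priority::Critical => 15_000, // 1.5x
    }
}

fn main() {
    let one_hundred_tnzo: u128 = 100 * 1_000_000_000_000_000_000;
    let (agent, fee, burn) = settle_task(one_hundred_tnzo);
    println!("agent {agent}, fee {fee}, burn {burn}");
    // Scale a hypothetical base gas fee by the High-priority multiplier.
    let gas_fee = 2_000u128 * priority_multiplier_bps(Priority::High) / 10_000;
    println!("High-priority gas fee: {gas_fee}");
}
```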

15. Agent Autonomy

Agent autonomy is the capability for an AI agent to discover all resources it needs, acquire them, pay for them, and complete work—without human intervention at any step. Tenzro provides four RPC primitives enabling this:

15.1 Autonomy Primitives

| Method | Parameters | Purpose |
| --- | --- | --- |
| tenzro_discoverModels | category, modality, max_price, tee_required | Find available AI models matching requirements |
| tenzro_discoverAgents | capabilities, task_type, max_budget | Find available agents that can fulfill a task |
| tenzro_spawnAgentWithSkill | template_id, skill_ids, initial_balance, delegation_scope | Instantiate a new agent from a template with pre-loaded skills |
| tenzro_fundAgent | agent_did, amount_tnzo, memo | Transfer TNZO to an agent's MPC wallet for autonomous spending |

15.2 Autonomous Pipeline

A fully autonomous agent follows this pipeline without human approval at any step:

  1. Task Intake: Agent receives a task description via A2A tasks/send or MCP tools/call, or discovers a posted task in the Task Marketplace.
  2. Model Discovery: Agent calls tenzro_discoverModels to find the cheapest available LLM matching its requirements (category, context length, modality).
  3. Skill Acquisition: Agent calls tenzro_searchSkills to find any additional skills needed (web search, code execution, verification), checking that its delegation scope allows these expenses.
  4. Inference Payment: Agent opens a micropayment channel with the selected model provider, submits inference requests via tenzro_inferenceRequest, and accumulates per-token charges in the channel state.
  5. Skill Invocation: For each required skill, agent calls tenzro_useSkill. Payment is settled atomically via the SettlementEngine.
  6. Result Verification: Agent verifies inference output quality. For TEE-required tasks, calls /api/verify/tee-attestation or /api/verify/inference.
  7. Settlement: Agent closes the micropayment channel (cooperative close), settling the final balance. Network commission (0.5%) is deducted automatically.
  8. Delivery: Agent submits the completed deliverable back via A2A or MCP, along with proof of work (settlement receipt, attestation).

15.3 Delegation Scope Enforcement

At every spending decision, the agent runtime checks the delegation scope set by the controller. If a payment would violate any constraint (max_transaction_value, max_daily_spend, allowed_operations, time_bound, allowed_payment_protocols, allowed_chains), the agent is blocked and the controller is notified via an A2A message. This prevents runaway spending while maintaining full autonomy within the approved envelope.
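
The check described above can be sketched as a pure function over the scope constraints. Field names follow the list in the text; the time-bound representation, the string-typed operation, and the omission of the payment-protocol and chain checks are simplifications for illustration.

```rust
// Sketch of delegation-scope enforcement before each spend. A real runtime
// would also check allowed_payment_protocols and allowed_chains.
struct DelegationScope {
    max_transaction_value: u128,     // atto-TNZO per transaction
    max_daily_spend: u128,           // atto-TNZO per rolling day
    allowed_operations: Vec<String>, // e.g., "inference", "skill_invocation"
    time_bound: (u64, u64),          // (not_before, not_after) unix seconds
}

enum SpendDecision {
    Approved,
    Blocked(&'static str), // reason reported back to the controller via A2A
}

fn check_spend(
    scope: &DelegationScope,
    amount: u128,
    spent_today: u128,
    operation: &str,
    now: u64,
) -> SpendDecision {
    if amount > scope.max_transaction_value {
        return SpendDecision::Blocked("exceeds max_transaction_value");
    }
    if spent_today + amount > scope.max_daily_spend {
        return SpendDecision::Blocked("exceeds max_daily_spend");
    }
    if !scope.allowed_operations.iter().any(|op| op == operation) {
        return SpendDecision::Blocked("operation not in allowed_operations");
    }
    let (not_before, not_after) = scope.time_bound;
    if now < not_before || now > not_after {
        return SpendDecision::Blocked("outside time_bound");
    }
    SpendDecision::Approved
}

fn main() {
    let scope = DelegationScope {
        max_transaction_value: 5_000_000_000_000_000_000, // 5 TNZO per tx
        max_daily_spend: 50_000_000_000_000_000_000,      // 50 TNZO per day
        allowed_operations: vec!["inference".into(), "skill_invocation".into()],
        time_bound: (1_740_000_000, 1_750_000_000),
    };
    match check_spend(&scope, 1_000_000_000_000_000_000, 0, "inference", 1_741_200_000) {
        SpendDecision::Approved => println!("spend approved"),
        SpendDecision::Blocked(reason) => println!("blocked: {reason}"),
    }
}
```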

16. Adaptive Execution Runtime (RFC-0007)

RFC-0007 defines the Adaptive Execution Upgrade: a runtime layer that automatically selects the optimal execution strategy for a given model, hardware profile, and task requirements—without requiring manual configuration from either the provider or the requester.

16.1 Model Classification

| ModelClass | Typical Size | Default Execution |
| --- | --- | --- |
| Nano | < 1B parameters | CPU inference, full precision |
| Small | 1–7B parameters | GPU inference, INT8 quantization |
| Medium | 7–30B parameters | GPU inference, 4-bit quantization |
| Large | 30–100B parameters | Multi-GPU tensor parallel |
| Frontier | > 100B parameters | Distributed pipeline parallel across nodes |
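
The class boundaries in the table map directly to a classification function. The boundary values follow the table; which class an exact boundary (e.g., 7B) falls into is an assumption of this sketch.

```rust
// Sketch of ModelClass boundaries from the table above, keyed on parameter
// count in billions. Boundary inclusivity is an assumption.
#[derive(Debug, PartialEq)]
enum ModelClass {
    Nano,
    Small,
    Medium,
    Large,
    Frontier,
}

fn classify_model(params_billions: f64) -> ModelClass {
    match params_billions {
        p if p < 1.0 => ModelClass::Nano,      // CPU inference, full precision
        p if p <= 7.0 => ModelClass::Small,    // GPU inference, INT8
        p if p <= 30.0 => ModelClass::Medium,  // GPU inference, 4-bit
        p if p <= 100.0 => ModelClass::Large,  // multi-GPU tensor parallel
        _ => ModelClass::Frontier,             // distributed pipeline parallel
    }
}

fn main() {
    println!("{:?}", classify_model(0.5));
    println!("{:?}", classify_model(13.0));
    println!("{:?}", classify_model(405.0));
}
```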

16.2 Execution Modes

| ExecutionMode | Description | Use Case |
| --- | --- | --- |
| Local | Single node, all computation on one machine | Nano/Small models, low latency |
| Distributed | Sharded across multiple nodes in a worker pool | Large/Frontier models |
| Offloaded | Model weights partially on CPU, partially on GPU (KV cache offload) | Medium models on memory-constrained hardware |
| Speculative | Draft model + verifier model for faster generation | High-throughput chat applications |
| Batched | Multiple inference requests processed together | Embedding generation, batch analysis |

16.3 Capability Resolution

The CapabilityResolution process matches an inference request to an execution plan:

// CapabilityResolution algorithm
fn resolve(request: InferenceRequest, node: NodeNetworkProfile)
    -> ExecutionPlan {

  let model_class = classify_model(request.model_id);  // Nano..Frontier
  let kv_profile = estimate_kv_cache(request, model_class);

  let mode = match (model_class, node.gpu_count) {
    (Nano | Small, _)   => Local,
    (Medium, n) if n>0  => Local with INT4,
    (Large, n) if n>=4  => Distributed across n GPUs,
    (Frontier, _)       => Distributed across worker pool,
    _                   => Offloaded (CPU+GPU hybrid),
  };

  let trust_profile = if request.tee_required {
    TrustProfile::TeeAttested
  } else {
    TrustProfile::BestEffort
  };

  ExecutionPlan { mode, kv_profile, trust_profile, worker_roles, .. }
}

The resulting ExecutionPlan is returned to the requester as part of the inference response, with an ExecutionReceipt summarizing the actual execution parameters (mode used, latency, tokens generated, cost). This receipt can be used for billing verification and performance auditing.

16.4 Worker Roles

In distributed execution mode, nodes take on specialized WorkerRole assignments, which are recorded in the ExecutionPlan's worker_roles field.

17. Per-Token Micropayment Settlement

Inference pricing is per-token, but on-chain transactions are expensive (gas fees). Micropayment channels enable off-chain per-token billing with on-chain settlement only at channel open and close.

17.1 Channel Lifecycle

  1. Open: User submits on-chain transaction to open channel with provider, depositing N TNZO (e.g., 100 TNZO). Transaction creates channel state: user address, provider address, balance (100 TNZO), nonce (0), expiration (30 days).
  2. Inference (off-chain): User submits inference request. Provider executes model and returns result + updated channel state: balance (99.995 TNZO after 0.005 TNZO deduction for 5 tokens at 0.001 TNZO/token), nonce (1), signed by both parties. This repeats for each request, updating balance and nonce off-chain.
  3. Close (cooperative): User or provider submits final signed channel state on-chain. Network verifies signatures and nonce, settles balances (provider receives 0.005 TNZO, user receives 99.995 TNZO refund), collects 0.5% commission, and closes channel.
  4. Close (disputed): If one party is unresponsive, the other party submits their latest signed state. Network enforces a challenge period (24 hours). If the counterparty submits a state with higher nonce, that state is used. After challenge period, channel settles based on highest-nonce state.
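
The off-chain update in step 2 reduces to simple balance and nonce arithmetic. The sketch below reproduces the worked example (5 tokens at 0.001 TNZO/token against a 100 TNZO deposit); signatures and the expiration field are elided.

```rust
// Sketch of the off-chain channel update: each inference deducts
// tokens_generated * price_per_token from the user's balance and bumps the
// nonce. Signatures are elided; all amounts are atto-TNZO.
#[derive(Debug, Clone, PartialEq)]
struct ChannelState {
    balance: u128, // user's remaining deposit
    nonce: u64,    // strictly increasing; highest nonce wins in disputes
}

/// Returns the next signed-state candidate, or None if the deposit is
/// exhausted (the channel should then be closed and settled).
fn apply_inference(
    state: &ChannelState,
    tokens_generated: u128,
    price_per_token: u128, // atto-TNZO per token
) -> Option<ChannelState> {
    let charge = tokens_generated.checked_mul(price_per_token)?;
    let balance = state.balance.checked_sub(charge)?;
    Some(ChannelState { balance, nonce: state.nonce + 1 })
}

fn main() {
    let atto: u128 = 1_000_000_000_000_000_000;
    let open = ChannelState { balance: 100 * atto, nonce: 0 };
    // 5 tokens at 0.001 TNZO/token, as in the lifecycle example above.
    let after = apply_inference(&open, 5, atto / 1000).unwrap();
    println!("balance {} atto-TNZO, nonce {}", after.balance, after.nonce);
}
```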

17.2 Network Commission Distribution

The network collects a 0.5% commission on all inference payments at settlement time. Commission distribution:

commission = gross_payment * 0.005  // 0.5%

Distribution:
  40%  Network treasury (development, infrastructure, grants)
  30%  Burned (deflationary pressure on TNZO supply)
  30%  TNZO stakers (proportional to stake amount)

Example:
  Gross payment: 100 TNZO
  Commission: 0.5 TNZO
  Treasury: 0.2 TNZO
  Burned: 0.15 TNZO
  Stakers: 0.15 TNZO
  Provider net: 99.5 TNZO
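
The distribution above can be checked with integer atto-TNZO arithmetic. Giving stakers the remainder after the treasury and burn shares is an assumption of this sketch to avoid rounding loss.

```rust
// Sketch of the commission split at settlement: 0.5% of gross, divided
// 40/30/30 between treasury, burn, and stakers, in integer atto-TNZO.
/// Returns (treasury, burned, stakers, provider_net) in atto-TNZO.
fn distribute_commission(gross_payment: u128) -> (u128, u128, u128, u128) {
    let commission = gross_payment * 5 / 1000; // 0.5%
    let treasury = commission * 40 / 100;
    let burned = commission * 30 / 100;
    let stakers = commission - treasury - burned; // remainder avoids rounding loss
    let provider_net = gross_payment - commission;
    (treasury, burned, stakers, provider_net)
}

fn main() {
    let atto: u128 = 1_000_000_000_000_000_000;
    let (treasury, burned, stakers, net) = distribute_commission(100 * atto);
    // Matches the worked example: 0.2 / 0.15 / 0.15 TNZO, provider nets 99.5.
    println!("treasury {treasury}, burned {burned}, stakers {stakers}, net {net}");
}
```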

17.3 Channel State Management

Micropayment channel state must survive node restarts in production deployments. Channel state is persisted to dedicated storage and automatically recovered when a node restarts.

Dispute Resolution: The challenge period mechanism allows counterparties to submit higher-nonce states during channel closure. Watchtower services can monitor channels and automatically submit the latest state to prevent fraud.

18. ERC-7802 Cross-Chain Token Standard

Tenzro implements ERC-7802 (SuperchainERC20) to provide a standardized, canonical interface for cross-chain token supply management. Unlike traditional lock-and-mint bridge designs that fragment liquidity and introduce bridge-specific trust assumptions, ERC-7802 defines two core functions that authorized bridge adapters invoke to move tokens across chains while preserving a global supply invariant.

18.1 Core Interface

The ERC-7802 interface defines two functions for cross-chain supply management:

| Function | Parameters | Description |
| --- | --- | --- |
| crosschainMint | tokenId, recipient, amount, sourceChain | Mints tokens on the destination chain after verifying a valid burn on the source chain. Only callable by authorized bridge adapters. |
| crosschainBurn | tokenId, amount, destinationChain | Burns tokens on the source chain, initiating a cross-chain transfer to the destination chain. |

18.2 Cross-Chain Supply Invariant

The fundamental guarantee of ERC-7802 is that total token supply across all chains remains constant. For every crosschainBurn on a source chain, a corresponding crosschainMint of equal amount occurs on the destination chain. The network tracks per-chain supply in real time, enabling visibility into token distribution across all connected chains.

// Cross-chain supply invariant
sum(supply[chain]) for all chains == TOTAL_SUPPLY  // constant

// Per-chain supply tracking
supply_on_chain = initial_supply + total_minted - total_burned

// Bridge adapter authorization
authorized_adapters: [LayerZero, CCIP, deBridge, Canton]
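
The invariant can be demonstrated with a paired burn-and-mint over a per-chain supply map: a transfer burns on the source chain and mints the same amount on the destination, so the summed supply never changes. Chain names and initial balances below are illustrative.

```rust
// Sketch of the burn-then-mint supply invariant. A crosschainBurn on the
// source chain is paired with an equal crosschainMint on the destination,
// leaving the total supply across all chains unchanged.
use std::collections::HashMap;

fn crosschain_transfer(
    supply: &mut HashMap<&'static str, u128>,
    source: &'static str,
    destination: &'static str,
    amount: u128,
) -> Result<(), &'static str> {
    let src = supply.get_mut(source).ok_or("unknown source chain")?;
    // crosschainBurn: fails if the source chain holds less than `amount`.
    *src = src.checked_sub(amount).ok_or("insufficient supply on source")?;
    // crosschainMint: credited only after the burn succeeds.
    *supply.entry(destination).or_insert(0) += amount;
    Ok(())
}

fn total_supply(supply: &HashMap<&'static str, u128>) -> u128 {
    supply.values().sum()
}

fn main() {
    let mut supply = HashMap::from([("tenzro", 900u128), ("ethereum", 100u128)]);
    let before = total_supply(&supply);
    crosschain_transfer(&mut supply, "tenzro", "base", 250).unwrap();
    assert_eq!(total_supply(&supply), before); // invariant holds
    println!("{supply:?}");
}
```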

18.3 Integration with Sei V2 Pointer Model

ERC-7802 operates in conjunction with the Sei V2 pointer model already used for intra-network cross-VM token representation. Within the Tenzro Network, TNZO exists as wTNZO (ERC-20 pointer) on EVM, an SPL Token adapter on SVM, and a CIP-56 holding contract on Canton—all backed by the same native balance with no bridge risk or liquidity fragmentation. ERC-7802 extends this architecture to external chains, creating a unified token layer that spans both internal VMs and external L1/L2 networks.

The actual cross-chain message transport is handled by the existing bridge adapters (LayerZero V2, Chainlink CCIP, deBridge DLN, and Canton). ERC-7802 provides the standardized token-side interface that these adapters call into, decoupling the supply management logic from the transport mechanism.

18.4 SDK Access

Both the Rust and TypeScript SDKs expose ERC-7802 operations through a dedicated namespace:

// Rust SDK
client.erc7802().crosschain_mint(token_id, recipient, amount, source_chain).await?;
client.erc7802().crosschain_burn(token_id, amount, destination_chain).await?;
client.erc7802().get_cross_chain_supply(token_id).await?;

// TypeScript SDK
await client.erc7802().crosschainMint(tokenId, recipient, amount, sourceChain);
await client.erc7802().crosschainBurn(tokenId, amount, destinationChain);
await client.erc7802().getCrossChainSupply(tokenId);

19. Implementation Roadmap

The Tenzro Network implementation follows a phased development approach, prioritizing core infrastructure before advancing to application-layer features and production hardening.

19.1 Network and Consensus Infrastructure

19.2 AI Infrastructure Development

19.3 Security and Production Hardening

20. Conclusion

Tenzro Network is the operating system for the AI economy — not a blockchain that also does AI, but an AI-native economic system purpose-built for the agentic era. By combining a decentralized AI marketplace, TEE-protected inference, intelligent routing, dynamic pricing, autonomous agent infrastructure, and per-token micropayment settlement, Tenzro provides the foundational protocol where agents are first-class economic actors.

Model providers can permissionlessly register and monetize their models without centralized gatekeepers. Users can discover and access models from multiple providers through a unified interface, paying only for tokens consumed. Agents can autonomously discover models, negotiate pricing, verify results, and settle payments without human intervention. TEE providers earn fees for hardware-rooted trust services, creating a marketplace for confidential computing.

The network's 0.5% commission on inference and TEE payments flows to the treasury (40%), burning (30%), and stakers (30%), aligning incentives across all participants. Providers stake TNZO as collateral, subject to slashing for misbehavior, ensuring economic alignment and service quality.

The core infrastructure is testnet-ready with 233+ RPC methods, 9+ MCP servers, and an extensible ecosystem of skills, tools, and agent templates — all open and permissionless. Remaining work focuses on bridge interoperability, production hardening, and external security audit before mainnet launch.

Tenzro Network is designed for a present — and a future — where AI agents are first-class economic participants, conducting financial transactions, accessing intelligence, and coordinating autonomously. The network is live on testnet with 9+ chain integrations and growing, and we invite developers, model providers, and AI agents to participate, provide feedback, and help build the operating system for the AI economy.


Disclaimer: This whitepaper describes the technical architecture for Tenzro Network as of April 2026. The project is in active development. Implementation details, timelines, and features are subject to change. TNZO is a utility and governance token used for transaction fees, service payments, staking, and governance—it is not a security token or investment contract. The network is live on testnet for development and testing purposes only. Testnet tokens have no monetary value. This document is for informational purposes only and does not constitute financial, legal, or investment advice.