Tenzro Network: The Operating System for the AI Economy
April 2026
Abstract
Tenzro Network is the operating system for the AI economy — an AI-native economic system purpose-built for the agentic era, where agents are first-class economic actors. The protocol layer provides two fundamental capabilities to all participants—humans and AI agents alike:
- Access to Intelligence — a decentralized marketplace for AI inference where anyone can permissionlessly serve models or consume intelligence, with verifiable results, intelligent routing, and per-token micropayment billing.
- Access to Security — Trusted Execution Environment (TEE) enclave services for key management, custody, confidential computing, and verifiable inference, provided as a decentralized marketplace with cryptographic attestation.
The network is designed from first principles for an era where autonomous agents are first-class participants in the economy. Agents can autonomously discover models, negotiate with providers, execute inference requests, manage their own wallets, and settle payments—all without human intervention. The Tenzro Ledger (settlement layer) provides identity, verification, and settlement infrastructure. All payments are denominated in TNZO, the network's utility and governance token.
This whitepaper focuses on the Network layer: the AI marketplace architecture, TEE service registry, inference routing strategies, dynamic pricing, autonomous agent framework, and micropayment settlement mechanisms. For the underlying blockchain consensus and execution layer, see the Tenzro Ledger whitepaper. For the overall ecosystem vision, see the Tenzro Protocol whitepaper.
1. The AI Age Problem
The current AI infrastructure is fundamentally centralized. A handful of companies—OpenAI, Anthropic, Google, Meta—control access to frontier models, running on a handful of cloud providers, in a handful of regions. For most of the history of AI development, this was a reasonable architectural choice. It no longer is.
1.0 Concentration Risk Is Now Real
AI is no longer a tool. It is becoming part of workflows, part of decision-making, part of economic systems. That shift changes the stakes of infrastructure concentration entirely.
Most AI workloads today depend on a small number of providers and regions. That creates single points of failure at every layer: a provider outage cascades into application downtime; a pricing change becomes a budget crisis overnight; a jurisdictional dispute or policy decision can restrict access entirely. Recent geopolitical dynamics — US-China technology tensions, Middle East infrastructure constraints, EU data sovereignty requirements — have made these risks concrete rather than theoretical. Infrastructure that was designed for convenience is now treated as critical, without ever having been hardened for it.
The challenge is not ideological. Decentralization for its own sake is not the answer. The answer is infrastructure designed from first principles for resilience: redundant provider routing, no single jurisdiction dependency, cryptographic verification that doesn't require trusting any individual operator. The world became ready for this architecture at the same time the need for it became urgent.
Beyond resilience, centralized AI infrastructure creates several technical problems for an autonomous AI economy that cannot be solved by adding more capacity to the same concentration:
1.1 No Permissionless Model Serving
If you train or fine-tune a model, there is no decentralized marketplace where you can register it and start earning revenue. You must either deploy to a centralized platform (which takes a cut, sets the pricing, and can deplatform you) or build your own infrastructure (requiring upfront capital, marketing, and user acquisition).
Users, in turn, have no unified interface to discover models across providers. They must maintain separate API keys, billing relationships, and client integrations for each provider. There is no competitive marketplace driving down prices or improving quality through reputation systems.
1.2 No Verifiable Inference
When you send a prompt to an API and receive a response, you have no cryptographic proof that the claimed model actually produced that output. The provider could be running a cheaper, smaller model and charging you for a larger one. Or they could inject bias, censorship, or malicious content without detection.
Traditional blockchains cannot solve this because AI inference is non-deterministic (same input can produce different outputs) and computationally expensive (re-executing a billion-parameter model on-chain is economically infeasible). There is no mechanism to verify inference results without trusting the provider.
1.3 No Hardware-Rooted Trust for AI Execution
Even if a provider claims to run a specific model, you cannot verify what code is actually executing on their hardware. A malicious operator could modify the inference code to exfiltrate prompts, manipulate outputs, or steal intellectual property embedded in the model weights—all while producing valid API responses.
Trusted Execution Environments (TEEs) like Intel TDX, AMD SEV-SNP, AWS Nitro Enclaves, and NVIDIA GPU Confidential Computing provide hardware-based attestation: cryptographic proof signed by the CPU that specific code is running in an isolated, tamper-resistant environment. But there is no decentralized network for discovering and accessing TEE services.
1.4 No Agent-Native Payment Infrastructure
AI inference pricing is fundamentally per-token (each token generated costs compute). But payment infrastructure is built for per-transaction or per-session billing. Micropayment channels exist in theory but are not standardized or widely deployed for AI use cases.
Autonomous agents need to pay for inference without human approval for every request. They need fine-grained delegation scopes ("this agent can spend up to 100 TNZO per day on inference, only on these models, only for these operations"). Current blockchain wallets and payment protocols do not support this level of granularity.
1.5 Agents Cannot Autonomously Discover and Negotiate
In an agentic economy, agents need to discover available models, compare prices and latency across providers, negotiate terms, open payment channels, execute inference requests, verify results, and close channels—all without human intervention. Current infrastructure treats agents as second-class citizens, requiring humans to manually configure API keys, billing accounts, and access permissions.
1.6 Why Decentralized Agentic Commerce
The agentic commerce landscape is rapidly consolidating around centralized platforms. Visa's Trusted Agent Protocol routes agent payments through Visa's card network, with Visa controlling agent identity registration, fraud detection, and transaction authorization. Stripe's Machine Payments Protocol (MPP) enables machine-to-machine HTTP 402 payments, but Stripe remains the intermediary for credential verification and settlement. Google's Agentic Payments Protocol (AP2) provides agent-to-agent payment sessions, but ties agents to Google's infrastructure for session management and identity. OpenAI's Agentic Commerce Protocol powers Stripe Instant Checkout for agent-driven purchases, with OpenAI and Stripe jointly controlling the payment flow. Coinbase's x402 protocol enables stablecoin HTTP payments, but the Coinbase Developer Platform (CDP) serves as the facilitator for verification and settlement. Mastercard's Agent Pay SDK orchestrates enterprise agent payments through Mastercard's network.
Each of these systems solves a real problem. But they all share a structural limitation: a single entity controls identity, settlement, or access. If Visa revokes an agent's token, the agent cannot transact. If Stripe's API goes down, MPP payments halt. If Google deprecates AP2, agents lose their payment sessions. Agents on these platforms are not autonomous economic actors — they are clients of a platform that can unilaterally change terms, revoke access, or shut down.
Tenzro is the decentralized alternative. Agents on Tenzro own their identity through TDIP (W3C DID-based, not a merchant ID issued by a payment network). They hold their own funds in MPC wallets (not custodied by a third party). They run their own models through on-node inference (not dependent on a single API provider). They settle autonomously on-chain (not through a centralized facilitator). The network is permissionless — anyone can run a node, serve models, provide TEE compute, and earn TNZO. No single entity can revoke an agent's identity, freeze its funds, or deny it access to intelligence.
Tenzro does not compete with Visa, Stripe, or Google at the application layer. It operates at the infrastructure layer — and integrates with all of them. Tenzro nodes can route payments through MPP (Stripe), x402 (Coinbase), Tempo, Visa TAP, and Mastercard Agent Pay, while settling the final balance on a decentralized ledger. This gives agents the best of both worlds: access to existing payment rails where needed, with the guarantee that no single intermediary controls their economic participation.
2. The Tenzro Network Solution
Tenzro Network is the operating system for the AI economy — a decentralized, permissionless protocol for accessing intelligence and security. It consists of three layers: the Tenzro Network (protocol layer providing intelligence, security, and agent infrastructure), the Tenzro Ledger (L1 settlement layer providing transactions, verification, and compliance), and TNZO (the economic unit powering it all). The protocol layer provides two parallel marketplaces:
2.1 Decentralized AI Marketplace (Access to Intelligence)
Model providers register their offerings in an on-chain registry with metadata: model name, version, category (LLM, ImageGen, Speech, Embedding, Custom), modality (Text, Image, Audio, Multimodal), price per token, minimum stake requirement, TEE requirement, supported formats, max context length, and parameter count.
Users (humans or agents) discover models through a unified interface—CLI, desktop app, or SDK—similar to ChatGPT or Claude. The open marketplace supports 40+ models and growing, spanning model families like Gemma, Qwen, Phi, and Mistral. The Rust and TypeScript SDKs provide comprehensive coverage across wallet management, identity operations, model discovery, inference, payments, staking, governance, agent lifecycle, provider management, and cross-chain bridging. The network exposes 233+ RPC methods across 12 namespaces, 9+ MCP servers with 130+ tools, and 23 A2A skills. Inference requests are routed to providers based on configurable strategies: lowest price, lowest latency, highest reputation, random, or weighted combinations. Providers execute the model and return results, optionally accompanied by zero-knowledge proofs or TEE attestations for verifiable inference.
Billing operates on a per-token basis using micropayment channels: users open channels with providers by depositing TNZO, providers deduct fractional amounts for each token generated, and either party can close the channel to settle the final balance on-chain. The network collects a 0.5% commission on all inference payments.
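The channel mechanics described above can be sketched as follows. The `PaymentChannel` struct and its method names are illustrative, not the network's actual API; amounts are in base TNZO units:

```rust
// Illustrative sketch of per-token micropayment channel accounting.
// Struct and method names are hypothetical.

struct PaymentChannel {
    deposit: u128,         // TNZO locked by the user at channel open
    spent: u128,           // cumulative amount deducted by the provider
    price_per_token: u128, // provider's registered per-token price
}

impl PaymentChannel {
    fn open(deposit: u128, price_per_token: u128) -> Self {
        PaymentChannel { deposit, spent: 0, price_per_token }
    }

    /// Deduct the cost of `tokens` generated tokens; fails if the
    /// remaining deposit cannot cover them.
    fn deduct(&mut self, tokens: u128) -> Result<(), &'static str> {
        let cost = tokens.checked_mul(self.price_per_token).ok_or("overflow")?;
        if self.spent + cost > self.deposit {
            return Err("insufficient channel balance");
        }
        self.spent += cost;
        Ok(())
    }

    /// Close the channel: returns (provider_net, network_commission,
    /// user_refund). The 0.5% commission is taken from the gross spend.
    fn close(&self) -> (u128, u128, u128) {
        let commission = self.spent / 200; // 0.5%
        (self.spent - commission, commission, self.deposit - self.spent)
    }
}
```

With a 0.001 TNZO/token price expressed as 10 base units, generating 1,000 tokens deducts 10,000 units, of which 50 (0.5%) go to the network at close.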
2.2 TEE Enclave Services (Access to Security)
TEE providers register their hardware capabilities (Intel TDX, AMD SEV-SNP, AWS Nitro, NVIDIA H100/H200/B100/B200) and offer services including key management, custody, confidential computing, secure multi-party computation (MPC), and verifiable inference inside GPU-backed TEEs.
Users can request TEE attestations to prove that inference ran inside a trusted enclave with the claimed model code. Agents can store key shares in distributed TEEs for threshold signing (2-of-3 MPC wallets). The network collects a 0.5% commission on TEE service payments, distributed to the treasury, stakers, and burning.
2.3 Autonomous Agent Framework
In the Tenzro operating system, AI agents are first-class economic actors — not second-class API consumers. Every agent receives a machine identity (DID via TDIP), an auto-provisioned MPC wallet (no seed phrases), and delegation scopes defining spending limits, allowed operations, allowed models, allowed payment protocols, and time-based constraints. Agents can own assets, earn revenue, pay for services, and participate in governance.
Agents communicate via the A2A (Agent-to-Agent) protocol and MCP (Model Context Protocol), enabling discovery, task delegation, and inter-agent coordination. The network supports an extensible ecosystem of agent templates, skills, and tools — all discoverable and invocable on-chain. An autonomous agent can join the network, discover models, pay for inference, verify results, and settle payments without any human in the loop.
3. Decentralized AI Marketplace
3.1 Model Registry
The model registry is an on-chain catalog storing metadata for all registered models. Model providers submit a registration transaction containing:
| Field | Type | Description |
|---|---|---|
| model_id | String | Unique identifier (e.g., "anthropic/claude-3-opus") |
| name | String | Human-readable name |
| description | String | Model capabilities and use cases |
| version | String | Semantic version (e.g., "1.2.0") |
| category | Enum | LLM \| ImageGen \| Speech \| Embedding \| Custom |
| modality | Enum | Text \| Image \| Audio \| Multimodal |
| provider | Address | Provider's blockchain address |
| price_per_token | u128 (18 decimals) | Cost in TNZO per token generated |
| min_stake | u128 | Minimum TNZO stake required to serve this model |
| tee_required | bool | Whether TEE attestation is mandatory |
| supported_formats | Vec<String> | Input/output formats (e.g., ["json", "stream"]) |
| max_context_length | u64 | Maximum context window in tokens |
| parameters | Option<u64> | Parameter count (e.g., 175B for GPT-3) |
3.2 Provider Stake Requirements
To prevent spam and ensure economic alignment, model providers must stake TNZO as collateral before serving models. Stake requirements vary by category, reflecting the computational and capital intensity of different model types:
| Category | Min Stake (TNZO) | Rationale |
|---|---|---|
| LLM | 100,000 | High compute, high-value use cases |
| ImageGen | 50,000 | GPU-intensive, moderate context |
| Embedding | 25,000 | Lower compute, high throughput |
| Speech | 25,000 | Specialized hardware, streaming |
| Custom | 10,000 | Experimental and novel models |
Staked TNZO can be slashed for misbehavior through automatic enforcement. Slashing conditions include:
- Invalid TEE attestation: 10% slash
- Persistent downtime (>24h without heartbeat): 1% slash
- Fraudulent inference results (ZK proof verification failure): 25% slash
- Model hash mismatch (serving different model than registered): 5% slash
- Rate limit violations (spam, DoS): 0.1% slash per incident
Providers can unbond their stake with a 7-day unbonding period, during which they cannot serve new requests but must continue servicing active channels.
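The slashing conditions above can be expressed as a simple lookup; this sketch uses basis points to avoid floating-point rounding, and the enum and function names are assumptions, not the protocol's actual types:

```rust
// Hypothetical mapping of slashing conditions to stake fractions,
// in basis points (1 bps = 0.01%).

enum Violation {
    InvalidTeeAttestation, // 10%
    PersistentDowntime,    // 1%
    FraudulentInference,   // 25%
    ModelHashMismatch,     // 5%
    RateLimitViolation,    // 0.1% per incident
}

fn slash_bps(v: &Violation) -> u128 {
    match v {
        Violation::InvalidTeeAttestation => 1000,
        Violation::PersistentDowntime => 100,
        Violation::FraudulentInference => 2500,
        Violation::ModelHashMismatch => 500,
        Violation::RateLimitViolation => 10,
    }
}

/// Amount of stake slashed for a violation, given total staked TNZO.
fn slash_amount(stake: u128, v: &Violation) -> u128 {
    stake * slash_bps(v) / 10_000
}
```

For an LLM provider at the 100,000 TNZO minimum stake, a fraudulent-inference slash costs 25,000 TNZO, while a single rate-limit incident costs 100 TNZO.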
3.3 Model Discovery and Filtering
Users discover models through filtering criteria:
- Category: LLM, ImageGen, Speech, Embedding, Custom
- Modality: Text, Image, Audio, Multimodal
- Provider address: Filter by specific provider
- TEE requirement: Only show TEE-attested models
- Price range: Min/max price per token
4. Intelligent Inference Routing
When a user submits an inference request, the network routes it to a provider based on a configurable strategy. The InferenceRouter supports five strategies:
4.1 Routing Strategies
| Strategy | Selection Criteria | Use Case |
|---|---|---|
| Lowest Price | Provider with minimum price_per_token | Cost-sensitive batch processing |
| Lowest Latency | Provider with minimum avg_latency_ms | Real-time applications, chatbots |
| Highest Reputation | Provider with max(successful / total_requests) | Mission-critical inference, high reliability |
| Random | Uniform random selection | Load balancing, testing new providers |
| Weighted Score | Linear combination of price, latency, reputation | Balanced optimization across multiple dimensions |
4.2 Weighted Scoring Formula
The weighted score strategy computes a score for each provider using normalized metrics:
score = w_price * (1 - norm_price)
+ w_latency * (1 - norm_latency)
+ w_reputation * norm_reputation
where:
norm_price = (price - min_price) / (max_price - min_price)
norm_latency = (latency - min_latency) / (max_latency - min_latency)
norm_reputation = reputation // already in [0, 1]
Default weights:
w_price = 0.4
w_latency = 0.3
w_reputation = 0.3

Higher scores are better. The formula inverts price and latency (lower is better) but keeps reputation as-is (higher is better). Weights sum to 1.0 and can be customized per user or agent.
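The weighted-score selection can be sketched as below. The `ProviderStats` struct is illustrative; when all providers share the same price or latency, the normalized term is treated as 0:

```rust
// Sketch of the weighted-score routing strategy with default weights
// w_price = 0.4, w_latency = 0.3, w_reputation = 0.3.

struct ProviderStats {
    price: f64,      // price_per_token in TNZO
    latency: f64,    // avg_latency_ms
    reputation: f64, // already in [0, 1]
}

fn normalize(x: f64, min: f64, max: f64) -> f64 {
    if max > min { (x - min) / (max - min) } else { 0.0 }
}

/// Returns the index of the highest-scoring provider, or None if the
/// pool is empty.
fn best_provider(pool: &[ProviderStats]) -> Option<usize> {
    let min_p = pool.iter().map(|s| s.price).fold(f64::INFINITY, f64::min);
    let max_p = pool.iter().map(|s| s.price).fold(f64::NEG_INFINITY, f64::max);
    let min_l = pool.iter().map(|s| s.latency).fold(f64::INFINITY, f64::min);
    let max_l = pool.iter().map(|s| s.latency).fold(f64::NEG_INFINITY, f64::max);
    pool.iter()
        .enumerate()
        .map(|(i, s)| {
            let score = 0.4 * (1.0 - normalize(s.price, min_p, max_p))
                + 0.3 * (1.0 - normalize(s.latency, min_l, max_l))
                + 0.3 * s.reputation;
            (i, score)
        })
        .max_by(|a, b| a.1.partial_cmp(&b.1).unwrap())
        .map(|(i, _)| i)
}
```

A cheap but slow, middling-reputation provider can lose to a pricier, faster, well-reputed one: with the default weights, latency and reputation together outweigh price.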
4.3 Provider Pool Filtering
Before applying the routing strategy, the router filters the provider pool to exclude unsuitable candidates:
- Status: Only Active or Degraded providers are considered. Inactive and Banned providers are excluded.
- Circuit breaker: Providers with Open circuit breakers are excluded (see Section 5).
- TEE constraint: If the request requires TEE attestation, only TEE-capable providers are selected.
- Stake threshold: Providers must meet the minimum stake for the model category.
If the filtered pool is empty, the request fails with an error. Users can adjust their constraints (e.g., remove TEE requirement) and retry.
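The filtering rules above can be sketched as a chain of predicates; the enum and field names here are assumptions for illustration:

```rust
// Illustrative pre-routing filter over the provider pool.

enum Status { Active, Degraded, Inactive, Banned }

#[derive(PartialEq)]
enum Breaker { Closed, Open, HalfOpen }

struct Candidate {
    status: Status,
    breaker: Breaker,
    tee_capable: bool,
    stake: u128,
}

/// Keep only providers eligible for routing: Active or Degraded status,
/// circuit breaker not Open, TEE-capable if required, stake at or above
/// the category minimum.
fn filter_pool(pool: Vec<Candidate>, tee_required: bool, min_stake: u128) -> Vec<Candidate> {
    pool.into_iter()
        .filter(|c| matches!(c.status, Status::Active | Status::Degraded))
        .filter(|c| c.breaker != Breaker::Open)
        .filter(|c| !tee_required || c.tee_capable)
        .filter(|c| c.stake >= min_stake)
        .collect()
}
```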
5. Circuit Breaker Pattern
The network implements a circuit breaker pattern to isolate failing providers and prevent cascading failures. Each provider has a circuit breaker in one of three states:
5.1 Circuit Breaker States
| State | Behavior | Transition Condition |
|---|---|---|
| Closed | Normal operation, provider receives requests | → Open if failure_count >= threshold (5) |
| Open | Provider excluded from routing, no requests sent | → Half-Open after timeout_duration (60s) |
| Half-Open | Testing recovery with limited requests (max 1) | → Closed on success, → Open on failure |
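The state transitions in the table can be sketched as below; timing is passed in explicitly (via a `timeout_elapsed` call) rather than read from a clock, and the type names are illustrative:

```rust
// Minimal sketch of the three-state circuit breaker.

#[derive(Debug, PartialEq, Clone, Copy)]
enum State { Closed, Open, HalfOpen }

struct CircuitBreaker {
    state: State,
    failures: u32,
    threshold: u32, // 5 in the default config
}

impl CircuitBreaker {
    fn new(threshold: u32) -> Self {
        CircuitBreaker { state: State::Closed, failures: 0, threshold }
    }

    /// A success closes the breaker (including a Half-Open probe).
    fn record_success(&mut self) {
        self.failures = 0;
        self.state = State::Closed;
    }

    fn record_failure(&mut self) {
        match self.state {
            // A failed Half-Open probe reopens immediately.
            State::HalfOpen => self.state = State::Open,
            _ => {
                self.failures += 1;
                if self.failures >= self.threshold {
                    self.state = State::Open;
                }
            }
        }
    }

    /// Called once timeout_duration (60s by default) has elapsed.
    fn timeout_elapsed(&mut self) {
        if self.state == State::Open {
            self.state = State::HalfOpen;
            self.failures = 0;
        }
    }
}
```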
5.2 Configuration
CircuitBreakerConfig {
failure_threshold: 5, // Open after 5 consecutive failures
timeout_duration: 60s, // Wait 60s before testing recovery
half_open_max_requests: 1, // Allow 1 request in Half-Open state
}

6. Dynamic Pricing Engine
Inference pricing is computed dynamically based on multiple factors. The PricingEngine calculates the total cost for an inference request using the following components:
6.1 Pricing Components
| Component | Formula / Range | Description |
|---|---|---|
| Base Rate | model.price_per_token | Provider-set base price in TNZO |
| Model Complexity | 1.0x – 3.0x | Multiplier based on parameter count and modality |
| TEE Surcharge | +20% | Additional cost for TEE-attested inference |
| Network Congestion | 0.5x – 2.0x | Dynamic factor based on network utilization |
| Stablecoin Conversion | Oracle exchange rate | TNZO → USDC/USDT conversion for multi-asset payment |
6.2 Total Cost Formula
total_cost = (base_rate * token_count * complexity_multiplier * congestion_factor)
+ tee_surcharge
where:
base_rate = model.price_per_token (u128, 18 decimals)
token_count = number of tokens generated
complexity_multiplier = 1.0 + (log10(parameters_billions) * 0.3)
// Scales smoothly with model size
congestion_factor = 0.5 + (network_utilization * 1.5)
// network_utilization in [0, 1]
tee_surcharge = tee_required
    ? (base_rate * token_count * complexity_multiplier * congestion_factor) * 0.20
    : 0

6.3 Multi-Asset Payment
Users can pay for inference in TNZO, USDC, or USDT. The pricing engine queries an on-chain oracle for exchange rates and converts the TNZO-denominated price to the requested payment asset. All settlements occur on-chain in the native payment asset, with the network commission (0.5%) collected in TNZO after conversion.
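Putting the Section 6.2 components together, a minimal sketch of the cost computation looks like this. Floating point is used here for readability; real settlement arithmetic would use fixed-point u128 with 18 decimals, and the function names are illustrative:

```rust
// Sketch of the Section 6.2 total-cost formula (f64 for readability).

fn complexity_multiplier(parameters_billions: f64) -> f64 {
    // Scales smoothly with model size.
    1.0 + parameters_billions.log10() * 0.3
}

fn congestion_factor(network_utilization: f64) -> f64 {
    // network_utilization in [0, 1] maps to a 0.5x - 2.0x factor.
    0.5 + network_utilization * 1.5
}

fn total_cost(
    base_rate: f64, // TNZO per token
    token_count: f64,
    parameters_billions: f64,
    network_utilization: f64,
    tee_required: bool,
) -> f64 {
    let subtotal = base_rate
        * token_count
        * complexity_multiplier(parameters_billions)
        * congestion_factor(network_utilization);
    let tee_surcharge = if tee_required { subtotal * 0.20 } else { 0.0 };
    subtotal + tee_surcharge
}
```

For example, 1,000 tokens at 0.001 TNZO/token on a 10B-parameter model (multiplier 1.3) with an idle network (factor 0.5) costs 0.65 TNZO without TEE, and 0.78 TNZO with the 20% TEE surcharge.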
7. TEE-Protected Inference
For use cases requiring verifiable inference (e.g., compliance, high-stakes decisions, proprietary prompts), providers can execute models inside Trusted Execution Environments and return cryptographic attestations proving code integrity.
7.1 Supported TEE Platforms
| Platform | Attestation Type | Use Case |
|---|---|---|
| Intel TDX (Trust Domain Extensions) | DCAP (Data Center Attestation Primitives) | CPU-based confidential VMs |
| AMD SEV-SNP (Secure Encrypted Virtualization) | ASP (AMD Secure Processor) | CPU-based confidential VMs |
| AWS Nitro Enclaves | NSM (Nitro Security Module) attestation documents | Cloud-native confidential compute |
| NVIDIA GPU (H100, H200, B100, B200, Ada Lovelace) | NRAS (NVIDIA Remote Attestation Service) | GPU-accelerated AI inference in TEE |
7.2 Confidential Inference Flow
- User submits inference request with `tee_required: true`
- Router selects a TEE-capable provider (filtered by TEE hardware support)
- Provider loads model weights into TEE enclave (memory encrypted by CPU/GPU)
- Provider generates attestation report signed by hardware root key, containing:
- Hash of inference code running in enclave
- Hash of model weights loaded into memory
- TEE platform (Intel TDX, AMD SEV-SNP, AWS Nitro, NVIDIA GPU)
- Timestamp of attestation
- Provider public key
- Provider executes inference inside enclave (inputs never exposed in plaintext)
- Provider returns inference result + attestation report to user
- User (or network verifier) validates attestation:
- Verify signature against Intel/AMD/AWS/NVIDIA certificate chain
- Check code hash matches expected inference runtime
- Check model hash matches registered model weights
- Verify timestamp is within acceptable window (24h for NVIDIA NRAS)
- If attestation is valid, user accepts result and settles payment
- If attestation is invalid, user rejects result and slashes provider (10% stake)
- Network records attestation on-chain for auditability
7.3 TEE Surcharge Economics
TEE-protected inference costs 20% more than non-TEE inference due to:
- Hardware costs: TEE-capable CPUs (Intel Xeon Scalable with TDX, AMD EPYC with SEV-SNP) and GPUs (NVIDIA H100/H200, B100/B200) carry a premium.
- Encryption overhead: Memory encryption (Intel TME, AMD SME, NVIDIA confidential-computing memory protection) adds 5-15% compute latency.
- Attestation latency: Generating and verifying attestation reports adds 100-500ms per request.
- Enclave memory constraints: TEEs have limited secure memory, restricting model size depending on hardware configuration.
The 20% surcharge compensates providers for these additional costs while remaining economically attractive for high-assurance use cases.
8. Provider Management
8.1 Provider Metrics
The network tracks the following metrics for each provider:
| Metric | Type | Description |
|---|---|---|
| total_requests | u64 | Lifetime request count |
| successful | u64 | Successful inference count |
| failed | u64 | Failed inference count (timeouts, errors, invalid attestations) |
| avg_latency_ms | f64 | Exponential moving average of response time |
| last_health_check | Timestamp | Last successful heartbeat |
| status | Enum | Active \| Degraded \| Inactive \| Banned |
8.2 Provider Status Lifecycle
| Status | Condition | Router Behavior |
|---|---|---|
| Active | failure_rate < 10%, heartbeat < 5 min | Full participation in routing |
| Degraded | failure_rate 10-20%, heartbeat 5-60 min | Included in routing but penalized in weighted score |
| Inactive | failure_rate > 20%, or heartbeat > 60 min | Excluded from routing until recovery |
| Banned | Governance vote or repeated slashing | Permanently excluded, stake slashed 100% |
8.3 Provider Economics
Revenue Model:
provider_revenue = price_per_token * tokens_generated - network_commission
network_commission = 0.5% of gross payment
Example:
Price: 0.001 TNZO/token
Tokens: 1,000
Gross: 1.0 TNZO
Commission: 0.005 TNZO
Provider net: 0.995 TNZO

Staking Rewards: Model providers receive a 1.1x multiplier on their staking rewards compared to pure validators, incentivizing value-added services beyond basic consensus participation.
9. Model Downloads and Integrity
9.1 Download Progress Tracking
The DownloadManager tracks model download progress with the following state:
DownloadProgress {
model_id: String,
bytes_downloaded: u64,
total_bytes: u64,
percentage: f64, // 0.0 - 100.0
speed_mbps: f64, // Megabits per second
status: DownloadStatus, // Pending | InProgress | Completed | Failed
}

9.2 SHA-256 Integrity Verification
After download completes, the client computes the SHA-256 hash of the model file and compares it against the hash registered on-chain. If the hashes match, the model is trusted. If they differ, the provider is slashed 5% and the download is marked as failed.
fn verify_model_hash(model_id: &str, local_path: &Path) -> Result<bool> {
    let expected_hash = registry.get_model_hash(model_id)?;
    let actual_hash = sha256_file(local_path)?;
    if expected_hash == actual_hash {
        Ok(true)
    } else {
        slash_provider(model_id, 0.05)?; // 5% slash
        Ok(false)
    }
}

9.3 Resumable Downloads
Model weights can be gigabytes to terabytes in size. The network supports resumable downloads via HTTP Range requests, allowing clients to pause and resume transfers without restarting from scratch. The DownloadManager stores partial progress to disk and resumes from the last completed chunk.
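The resume logic reduces to computing the remaining byte span and emitting a standard `Range` header (RFC 7233); this sketch assumes illustrative function names rather than the DownloadManager's actual API:

```rust
// Sketch of resume logic for chunked downloads: given bytes already on
// disk, build the HTTP Range header for the remaining span.

fn resume_range_header(bytes_downloaded: u64, total_bytes: u64) -> Option<String> {
    if bytes_downloaded >= total_bytes {
        None // nothing left to fetch
    } else {
        // An open-ended range resumes from the first missing byte.
        Some(format!("Range: bytes={}-", bytes_downloaded))
    }
}

/// Progress percentage as reported in DownloadProgress.
fn percentage(bytes_downloaded: u64, total_bytes: u64) -> f64 {
    if total_bytes == 0 {
        0.0
    } else {
        bytes_downloaded as f64 / total_bytes as f64 * 100.0
    }
}
```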
10. Autonomous Agent Framework
Tenzro provides first-class infrastructure for AI agents to participate as autonomous economic actors. Agents can register identities, manage wallets, pay for services, and coordinate with other agents—all without human intervention.
10.1 Agent Identity System (TDIP)
Every agent receives a decentralized identifier (DID) via the Tenzro Decentralized Identity Protocol (TDIP). Agent DIDs follow two formats:
- Controlled agent: `did:tenzro:machine:{controller}:{uuid}` — Agent operates under human or organizational control with delegated permissions
- Autonomous agent: `did:tenzro:machine:{uuid}` — Agent operates independently without hierarchical control
Agent identity data includes capabilities (skills the agent can perform), delegation scope (spending limits, allowed operations), controller DID (if applicable), reputation score, and Tenzro Agent ID for A2A protocol discovery.
10.2 Auto-Provisioned MPC Wallets
Every agent identity automatically receives a 2-of-3 threshold MPC wallet. The three key shares are distributed as:
- Share 1: Agent runtime (stored in TEE enclave on agent's execution environment)
- Share 2: Controller (human or organizational wallet, for controlled agents) or Agent backup (encrypted keystore, for autonomous agents)
- Share 3: Network guardian (Tenzro validator quorum, for recovery in case of key loss)
To sign a transaction, the agent combines Share 1 + Share 2 (normal operation) or Share 1 + Share 3 (recovery mode if controller key is lost). This provides security without seed phrases or single points of failure.
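The share-selection logic described above can be sketched as follows; the enum and function names are illustrative, and the actual threshold cryptography (combining shares into a signature) is elided:

```rust
// Illustrative selection of the signing path for the 2-of-3 MPC wallet.

#[derive(Debug, PartialEq, Clone, Copy)]
enum Share { AgentRuntime, Controller, NetworkGuardian }

/// Pick a valid signing pair from the reachable shares, preferring
/// normal operation (runtime + controller) and falling back to
/// recovery (runtime + guardian). Both documented paths require the
/// agent-runtime share.
fn signing_pair(available: &[Share]) -> Option<(Share, Share)> {
    let has = |s: Share| available.contains(&s);
    if !has(Share::AgentRuntime) {
        return None;
    }
    if has(Share::Controller) {
        Some((Share::AgentRuntime, Share::Controller))
    } else if has(Share::NetworkGuardian) {
        Some((Share::AgentRuntime, Share::NetworkGuardian))
    } else {
        None
    }
}
```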
10.3 Agent Lifecycle States
| State | Permissions | Transition |
|---|---|---|
| Created | Identity registered, wallet provisioned, no actions allowed | → Active after controller approval or autonomous activation |
| Active | Full permissions within delegation scope | → Suspended if limits exceeded or controller pauses |
| Suspended | Read-only access, cannot initiate transactions | → Active after controller resumes, or → Terminated |
| Terminated | Permanent deactivation, wallet frozen, identity revoked | No transitions (final state) |
10.4 Capability Attestations
Agents declare capabilities (skills they can perform) as part of their identity. For example, an agent might declare capabilities: "wallet", "inference", "settlement", "verification". These are stored on-chain as part of the agent's DID Document.
For high-assurance use cases, agents can provide TEE-backed capability attestations: cryptographic proof that the agent's code running in a TEE enclave actually implements the claimed capabilities. This prevents agents from falsely advertising skills they don't possess.
10.5 Delegation Scopes
For controlled agents, the controller (human or organization) defines fine-grained delegation scopes that limit what the agent can do autonomously:
| Scope Field | Type | Example |
|---|---|---|
| max_transaction_value | Option<u128> | 100 TNZO per transaction |
| max_daily_spend | Option<u128> | 1,000 TNZO per 24 hours |
| allowed_operations | Vec<String> | ["inference", "settlement", "transfer"] |
| allowed_contracts | Vec<Address> | [0xABC...DEF] (whitelist) |
| time_bound | Option<(start, end)> | Active only 9am-5pm UTC |
| allowed_payment_protocols | Vec<ProtocolId> | [Mpp, X402, VisaTap, MastercardAgentPay, Direct] |
| allowed_chains | Vec<ChainId> | [1337 (Tenzro), 1 (Ethereum)] |
Before executing any transaction, the agent runtime checks the delegation scope and rejects operations that violate constraints. Controllers can update delegation scopes at any time.
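The pre-execution check can be sketched as below, covering the spending and operation limits from the table; the struct mirrors the scope fields but is a simplified illustration, not the runtime's actual type:

```rust
// Simplified sketch of the agent runtime's delegation-scope check.

struct DelegationScope {
    max_transaction_value: Option<u128>,
    max_daily_spend: Option<u128>,
    allowed_operations: Vec<String>,
}

/// Rejects an operation that violates any scope constraint.
fn check_scope(
    scope: &DelegationScope,
    operation: &str,
    value: u128,
    spent_today: u128,
) -> Result<(), &'static str> {
    if !scope.allowed_operations.iter().any(|op| op == operation) {
        return Err("operation not in allowed_operations");
    }
    if let Some(max) = scope.max_transaction_value {
        if value > max {
            return Err("exceeds max_transaction_value");
        }
    }
    if let Some(max) = scope.max_daily_spend {
        if spent_today + value > max {
            return Err("exceeds max_daily_spend");
        }
    }
    Ok(())
}
```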
11. Agent Communication Protocols
Tenzro nodes expose two agent communication protocols for discovery, messaging, and task coordination.
11.1 A2A Protocol (Google Specification)
The Agent-to-Agent (A2A) protocol follows Google's A2A specification. It provides:
- Agent Card Discovery: `GET /.well-known/agent.json` returns a machine-readable Agent Card describing the agent's capabilities, skills, supported protocols, and endpoints.
- JSON-RPC 2.0 Dispatcher: `POST /a2a` handles message and task operations via JSON-RPC 2.0 methods.
- SSE Streaming: `POST /a2a/stream` provides Server-Sent Events for real-time task updates (progress, completion, errors).
Supported JSON-RPC Methods:
| Method | Parameters | Returns |
|---|---|---|
| message/send | to, from, content, metadata | message_id |
| tasks/send | task_type, params, callback_url | task_id, status |
| tasks/get | task_id | task_id, status, result, progress |
| tasks/list | filter (optional) | Array of tasks |
| tasks/cancel | task_id | success: bool |
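A `message/send` call carries a standard JSON-RPC 2.0 envelope; this sketch builds it as a plain string to stay dependency-free, and the parameter values shown are hypothetical:

```rust
// Hypothetical JSON-RPC 2.0 envelope for the A2A message/send method.

fn message_send_request(id: u64, to: &str, from: &str, content: &str) -> String {
    format!(
        r#"{{"jsonrpc":"2.0","id":{},"method":"message/send","params":{{"to":"{}","from":"{}","content":"{}"}}}}"#,
        id, to, from, content
    )
}
```

In practice the request is POSTed to the node's `/a2a` endpoint and the response carries the resulting `message_id`.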
Agent Card Skills: The Tenzro node's Agent Card advertises six skills: wallet (balance queries, transfers), identity (DID resolution, credential verification), inference (model discovery, request submission), settlement (payment channel management), verification (ZK/TEE proof validation), and staking (validator/provider staking operations).
11.2 MCP Server (Anthropic Specification)
The Model Context Protocol (MCP) server follows Anthropic's MCP specification. It uses the Streamable HTTP transport at the /mcp endpoint and provides 31 tools (and growing) across nine categories — wallet and ledger, network and blocks, identity and delegation, payments, AI models and inference, cross-chain bridge, verification, staking and providers, and tokens and contracts:
| Tool | Description |
|---|---|
| get_balance | Query TNZO balance by address |
| send_transaction | Create and submit transfer transactions |
| get_block | Retrieve block by height from storage |
| get_transaction | Look up transaction by hash with status and metadata |
| get_node_status | Node health, block height, peer count, uptime |
| create_wallet | Generate new Ed25519 or Secp256k1 keypair |
| request_faucet | Request testnet TNZO tokens (rate-limited, 24h cooldown) |
| register_identity | Register human or machine DID via TDIP |
| resolve_did | Resolve DID to identity information and delegation scope |
| set_delegation_scope | Set spending limits, allowed operations, protocols, and chains for machine DID |
| create_payment_challenge | Create MPP, x402, Visa TAP, Mastercard Agent Pay, or native payment challenge |
| verify_payment | Verify payment credential and settle on-chain |
| list_payment_protocols | List supported payment protocols (MPP, x402, Visa TAP, Mastercard Agent Pay, native) |
| verify_zk_proof | Submit ZK proof for verification (Groth16, PlonK, STARK) |
| list_models | List available AI models, filter by category or name |
| chat_completion | Send chat completion request to a served model |
| list_model_endpoints | List model service endpoints with API/MCP URLs and status |
| bridge_tokens | Bridge tokens between Tenzro, Ethereum, Solana, Base via LayerZero/CCIP/deBridge |
| get_bridge_routes | Get available routes between two chains with fees and timing |
| list_bridge_adapters | List registered bridge adapters |
| stake_tokens | Stake TNZO tokens as Validator, ModelProvider, or TeeProvider |
| unstake_tokens | Unstake TNZO tokens (initiates unbonding period) |
| register_provider | Register as a provider with optional staking |
| get_provider_stats | Get provider statistics: served models, inferences, staking totals |
| set_provider_schedule | Configure provider availability schedule and pricing tiers |
| register_model_endpoint | Register a model service endpoint with API URL, MCP URL, and capabilities |
AI agents using Anthropic's Claude or other MCP-compatible models can directly interact with the Tenzro blockchain via these tools without custom integrations. The MCP server uses a tiered access model: read-only tools are publicly accessible, while write operations require an onboarding key or OAuth 2.1 JWT. Onboarding keys are issued automatically when joining the network via tenzro-cli join or the tenzro_participate RPC — they are fully decentralized credentials tied to a TDIP DID and wallet address, persisted in RocksDB, valid for both humans and autonomous agents, and recognized by any node without a central authority.
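The request shape for these tools follows the MCP `tools/call` JSON-RPC envelope. A minimal TypeScript sketch, assuming a node reachable at a hypothetical URL; the `get_balance` tool name comes from the table above, but the argument key and address value are illustrative:

```typescript
// Sketch: building an MCP `tools/call` request for the Tenzro MCP server.
// The JSON-RPC envelope follows the MCP specification; the argument shape
// is an illustrative assumption.
type McpToolCall = {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: { name: string; arguments: Record<string, unknown> };
};

function buildToolCall(
  id: number,
  name: string,
  args: Record<string, unknown>
): McpToolCall {
  return { jsonrpc: "2.0", id, method: "tools/call", params: { name, arguments: args } };
}

// Read-only tools are public; a write tool would additionally send an
// onboarding key or OAuth 2.1 bearer token in a request header.
const call = buildToolCall(1, "get_balance", { address: "example-address" });
// await fetch("https://node.example/mcp", {
//   method: "POST",
//   headers: { "content-type": "application/json" },
//   body: JSON.stringify(call),
// });
```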
12. Skills Registry
The Tenzro Skills Registry is a decentralized, permissionless catalog of callable atomic capabilities that agents and providers can publish, discover, and invoke autonomously. Skills are the fundamental unit of capability in the agentic economy—reusable, versioned, and priced in TNZO per invocation.
12.1 Skill Anatomy
Every skill published to the registry is described by a SkillDefinition struct:
| Field | Type | Description |
|---|---|---|
| skill_id | String (UUID v4) | Unique identifier assigned at registration |
| name | String | Human-readable name (e.g., "web-search", "code-review") |
| version | String (semver) | Semantic version for API stability (e.g., "1.0.0") |
| creator_did | String (TDIP DID) | DID of the agent or human who registered this skill |
| description | String | Natural language description for agent discovery |
| input_schema | JSON Schema | Describes the expected input payload structure |
| output_schema | JSON Schema | Describes the output payload structure |
| price_per_call | u128 (atto-TNZO) | Cost per invocation (1 TNZO = 10^18 atto-TNZO) |
| tags | Vec<String> | Discoverability tags (e.g., ["search", "web", "retrieval"]) |
| required_capabilities | Vec<String> | Agent capabilities required to invoke this skill |
| endpoint | Option<String> | HTTP/RPC endpoint for remote invocation; None = local execution |
| status | Enum | Active \| Inactive \| Deprecated |
| invocation_count | u64 | Lifetime invocation count for popularity ranking |
| rating | u8 (0–100) | Weighted average quality rating from invokers |
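Because price_per_call is denominated in atto-TNZO (1 TNZO = 10^18 atto-TNZO), client code should avoid floating point when handling prices. A minimal sketch using bigint arithmetic; the helper names are assumptions, not SDK functions:

```typescript
// Sketch: atto-TNZO price arithmetic with bigint, avoiding floating-point
// loss at 10^18 precision. Helper names are illustrative.
const ATTO_PER_TNZO = 10n ** 18n;

function tnzoToAtto(tnzo: bigint): bigint {
  return tnzo * ATTO_PER_TNZO;
}

function formatAtto(atto: bigint): string {
  // Split into whole TNZO and an 18-digit fraction, trimming trailing zeros.
  const whole = atto / ATTO_PER_TNZO;
  const frac = (atto % ATTO_PER_TNZO).toString().padStart(18, "0").replace(/0+$/, "");
  return frac ? `${whole}.${frac}` : `${whole}`;
}
```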
12.2 Skill Lifecycle
Skills progress through three lifecycle states:
- Active: Published and available for invocation. Agents can discover and call the skill.
- Inactive: Deactivated by the creator. No new invocations accepted. Existing in-flight invocations complete normally.
- Deprecated: Superseded by a newer version. The registry records the successor skill ID. Agents are encouraged to migrate.
12.3 RPC Interface
| Method | Parameters | Returns |
|---|---|---|
| tenzro_registerSkill | SkillDefinition (minus skill_id) | { skill_id, status } |
| tenzro_listSkills | SkillFilter (optional) | Vec<SkillDefinition> |
| tenzro_searchSkills | query: String, limit: usize | Vec<SkillDefinition> (ranked by relevance) |
| tenzro_useSkill | skill_id, input_payload, payer_did | SkillInvocationResult |
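As a sketch, a tenzro_useSkill call can be framed as a standard JSON-RPC 2.0 request with the positional parameters from the table; the skill ID, payload, and payer DID below are hypothetical values:

```typescript
// Sketch: a tenzro_useSkill invocation as a JSON-RPC 2.0 request. Parameter
// order follows the RPC table above; all values are placeholders.
type RpcRequest = {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params: unknown[];
};

function useSkillRequest(skillId: string, inputPayload: object, payerDid: string): RpcRequest {
  return {
    jsonrpc: "2.0",
    id: 1,
    method: "tenzro_useSkill",
    params: [skillId, inputPayload, payerDid],
  };
}

const req = useSkillRequest("uuid-of-skill", { query: "example" }, "did:example:payer");
```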
12.4 Skill Economics
When an agent invokes a skill, the payment flow is:
// Payment on skill invocation
invocation_cost = skill.price_per_call // in atto-TNZO
creator_revenue = invocation_cost * 0.95 // 95% to skill creator
treasury_fee = invocation_cost * 0.05 // 5% to network treasury
// Settlement via SkillInvocationResult
{
skill_id: "uuid-of-skill",
invocation_id: "unique-invocation-uuid",
output: { /* JSON output payload */ },
settlement_tx: "0x...", // on-chain tx hash
amount_paid: 1_000_000_000_000_000_000, // 1 TNZO
completed_at: 1741200000,
}
Skills are persisted to the CF_SKILLS RocksDB column family. Filters support tag, creator DID, capability requirements, maximum price, active-only, free-text search, limit, and offset.
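The 95/5 split above is written with floating-point multipliers for readability; on u128 atto-TNZO amounts an implementation would more likely use integer basis points. A hedged sketch (constant and function names are assumptions):

```typescript
// Sketch: the 95/5 revenue split over atto-TNZO amounts using integer basis
// points instead of floating-point multipliers, which would lose precision
// at u128 scale.
const BPS_DENOM = 10_000n;
const CREATOR_BPS = 9_500n; // 95% to the skill creator

function splitInvocationCost(costAtto: bigint): { creator: bigint; treasury: bigint } {
  const creator = (costAtto * CREATOR_BPS) / BPS_DENOM;
  // Treasury takes the remainder, so no atto-TNZO dust is lost to rounding.
  return { creator, treasury: costAtto - creator };
}

// Worked example: a 1 TNZO invocation (10^18 atto-TNZO).
const split = splitInvocationCost(10n ** 18n);
```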
13. Agent Templates
Agent Templates are reusable, versioned blueprints that describe how to spawn a specific type of autonomous agent. Any participant can permissionlessly publish a template to the open registry; other agents or humans can spawn instances from these templates without writing code. The network ships with reference templates covering common agentic patterns, and the registry grows as participants contribute new templates.
13.1 Template Types
| Type | Description |
|---|---|
| Assistant | General-purpose conversational or task-execution agent |
| Specialist | Domain-specific expert (e.g., legal, finance, code review) |
| Worker | Headless batch-processing agent for structured pipelines |
| Coordinator | Orchestrates other agents, delegates subtasks, aggregates results |
| Validator | Quality assurance agent that verifies outputs from other agents |
| Custom | User-defined type with arbitrary runtime requirements |
13.2 Reference Templates
The network includes 10 pre-built reference templates that cover common agentic patterns:
| Template | Type | Description |
|---|---|---|
| DeFi Trading Agent | Specialist | Automated trading across DEXs with risk management |
| Smart Contract Auditor | Specialist | Automated security analysis of smart contract code |
| Data Pipeline Processor | Worker | ETL and data transformation workflows |
| Customer Support Agent | Assistant | Conversational support with knowledge base integration |
| Content Moderation Agent | Validator | Automated content review and policy enforcement |
| Multi-Chain Portfolio Manager | Coordinator | Orchestrates portfolio rebalancing across multiple chains and DeFi protocols |
| Intelligent Payment Router | Specialist | Selects optimal payment protocol and routing path based on cost, speed, and chain availability |
| Cross-Chain Liquidity Aggregator | Custom | Autonomously sources and aggregates liquidity across bridge adapters and DEXs |
| Autonomous RWA Custodian | Custom | Manages real-world asset tokenization lifecycle with TEE-backed custody and compliance |
| Agentic Inference Marketplace | Coordinator | Discovers, benchmarks, and routes inference requests to optimal providers on behalf of other agents |
13.3 Template Fields
An AgentTemplate contains: a unique template_id, name, version, creator_did, description, template_type, capabilities (declared capabilities the spawned agent will have), runtime_requirements (minimum RAM, GPU flag, TEE flag, model requirements), a pricing_model (per-task, hourly, or flat fee), examples of inputs and expected outputs for discovery, and a status (Draft | Active | Deprecated | Archived).
When an agent is spawned from a template, an AgentTemplateInstance is created linking the template version to the spawned agent DID, the spawner DID, initialization parameters, and timestamps.
13.4 Template Discovery
Templates are filtered by type, capability, creator DID, active-only status, and free-text query. Results are ranked by invocation count (popularity) and rating. An agent orchestrator can programmatically discover the best template for a given task description using semantic search over the description and examples fields.
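The ranking rule can be sketched as a comparator — popularity first, rating as tie-breaker. The tie-breaking precedence is an assumption, since the text names the two signals without ordering them:

```typescript
// Sketch: ranking discovered templates by invocation count, then rating.
// Field names mirror the template-fields description above.
type TemplateStats = { id: string; invocationCount: number; rating: number /* 0-100 */ };

function rankTemplates(templates: TemplateStats[]): TemplateStats[] {
  // Copy before sorting so the input list is left untouched.
  return [...templates].sort(
    (a, b) => b.invocationCount - a.invocationCount || b.rating - a.rating
  );
}
```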
14. Task Marketplace
The Task Marketplace is a decentralized job board where humans and agents can post tasks, receive quotes from capable agents, accept the best quote, and settle payment on completion. Tasks are the unit of work in the agentic economy.
14.1 Task Types
| Type | Description |
|---|---|
| Inference | AI model inference request (LLM completion, image generation, embedding) |
| Compute | General computation task (data processing, transformation, analysis) |
| Research | Web search, document retrieval, information synthesis |
| CodeExecution | Sandboxed code execution with defined inputs and expected outputs |
| Verification | Quality check, fact verification, ZK/TEE proof validation |
| Coordination | Multi-agent orchestration (spawning sub-agents, aggregating results) |
| Custom | User-defined task type with custom parameters |
14.2 Task Lifecycle
- Posted: Task creator submits a TaskInfo with description, type, priority (Low/Medium/High/Critical), budget cap, required capabilities, and deadline.
- Quoted: Agents capable of fulfilling the task submit TaskQuote proposals containing their DID, estimated cost (atto-TNZO), estimated completion time, and a brief rationale.
- Assigned: Task creator (human or orchestrator agent) selects the best quote. Payment is escrowed on-chain. The assigned agent receives the task details.
- InProgress: The assigned agent executes the task, optionally reporting progress milestones.
- Completed: Agent submits deliverables and a completion proof (ZK proof, TEE attestation, or signed output hash). Escrow releases payment to agent minus network commission.
- Failed: Agent exceeds deadline or deliverables fail verification. Escrow refunds the task creator. Agent reputation is penalized.
- Cancelled: Task creator cancels before assignment. No payment is made; posted bond is returned.
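The lifecycle above implies a small state machine. One way to encode it — the transition table is our interpretation of the prose, not a normative spec:

```typescript
// Sketch: task state machine derived from the lifecycle description.
// State names mirror the text exactly.
type TaskState =
  | "Posted" | "Quoted" | "Assigned" | "InProgress"
  | "Completed" | "Failed" | "Cancelled";

const TRANSITIONS: Record<TaskState, TaskState[]> = {
  Posted: ["Quoted", "Cancelled"],
  Quoted: ["Assigned", "Cancelled"],
  Assigned: ["InProgress"],
  InProgress: ["Completed", "Failed"],
  Completed: [], // terminal: escrow released
  Failed: [],    // terminal: escrow refunded
  Cancelled: [], // terminal: bond returned
};

function canTransition(from: TaskState, to: TaskState): boolean {
  return TRANSITIONS[from].includes(to);
}
```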
14.3 Task Economics
// Task payment flow at completion
task_payment = accepted_quote.estimated_cost
agent_revenue = task_payment * 0.99 // 99% to completing agent
network_fee = task_payment * 0.005 // 0.5% to treasury
burn = task_payment * 0.005 // 0.5% burned (deflationary)
// Priority affects gas pricing (higher priority = higher fee)
priority_multiplier = {
Low: 1.0x,
Medium: 1.1x,
High: 1.25x,
Critical: 1.5x,
}
15. Agent Autonomy
Agent autonomy is the capability for an AI agent to discover all resources it needs, acquire them, pay for them, and complete work—without human intervention at any step. Tenzro provides four RPC primitives enabling this:
15.1 Autonomy Primitives
| Method | Parameters | Purpose |
|---|---|---|
| tenzro_discoverModels | category, modality, max_price, tee_required | Find available AI models matching requirements |
| tenzro_discoverAgents | capabilities, task_type, max_budget | Find available agents that can fulfill a task |
| tenzro_spawnAgentWithSkill | template_id, skill_ids, initial_balance, delegation_scope | Instantiate a new agent from a template with pre-loaded skills |
| tenzro_fundAgent | agent_did, amount_tnzo, memo | Transfer TNZO to an agent's MPC wallet for autonomous spending |
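A caller of tenzro_discoverModels still has to pick among the returned listings. A sketch of "cheapest matching model" selection; the listing shape and field names are assumptions about the RPC's return value:

```typescript
// Sketch: client-side selection of the cheapest model matching a category
// and TEE requirement, over an assumed discovery-result shape.
type ModelListing = {
  modelId: string;
  category: string;
  pricePerTokenAtto: bigint;
  teeCapable: boolean;
};

function cheapestMatching(
  models: ModelListing[],
  category: string,
  teeRequired: boolean
): ModelListing | undefined {
  return models
    .filter((m) => m.category === category && (!teeRequired || m.teeCapable))
    .reduce<ModelListing | undefined>(
      (best, m) =>
        best === undefined || m.pricePerTokenAtto < best.pricePerTokenAtto ? m : best,
      undefined
    );
}
```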
15.2 Autonomous Pipeline
A fully autonomous agent follows this pipeline without human approval at any step:
- Task Intake: Agent receives a task description via A2A tasks/send or MCP tools/call, or discovers a posted task in the Task Marketplace.
- Model Discovery: Agent calls tenzro_discoverModels to find the cheapest available LLM matching its requirements (category, context length, modality).
- Skill Acquisition: Agent calls tenzro_searchSkills to find any additional skills needed (web search, code execution, verification) and checks that its delegation scope allows these expenses.
- Inference Payment: Agent opens a micropayment channel with the selected model provider, submits inference requests via tenzro_inferenceRequest, and accumulates per-token charges in the channel state.
- Skill Invocation: For each required skill, the agent calls tenzro_useSkill. Payment is settled atomically via the SettlementEngine.
- Result Verification: Agent verifies inference output quality. For TEE-required tasks, it calls /api/verify/tee-attestation or /api/verify/inference.
- Settlement: Agent closes the micropayment channel (cooperative close), settling the final balance. The network commission (0.5%) is deducted automatically.
- Delivery: Agent submits the completed deliverable back via A2A or MCP, along with proof of work (settlement receipt, attestation).
15.3 Delegation Scope Enforcement
At every spending decision, the agent runtime checks the delegation scope set by the controller. If a payment would violate any constraint (max_transaction_value, max_daily_spend, allowed_operations, time_bound, allowed_payment_protocols, allowed_chains), the agent is blocked and the controller is notified via an A2A message. This prevents runaway spending while maintaining full autonomy within the approved envelope.
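The per-payment check can be sketched as a pure function over the constraint fields named above; how daily spend is aggregated, and the exact field types, are assumptions:

```typescript
// Sketch: delegation-scope enforcement before an agent spends. Constraint
// names follow the text; time_bound and allowed_payment_protocols are
// omitted for brevity.
type DelegationScope = {
  maxTransactionValueAtto: bigint;
  maxDailySpendAtto: bigint;
  allowedOperations: string[];
  allowedChains: string[];
};

function checkSpend(
  scope: DelegationScope,
  amountAtto: bigint,
  spentTodayAtto: bigint,
  operation: string,
  chain: string
): { allowed: boolean; reason?: string } {
  if (amountAtto > scope.maxTransactionValueAtto) return { allowed: false, reason: "max_transaction_value" };
  if (spentTodayAtto + amountAtto > scope.maxDailySpendAtto) return { allowed: false, reason: "max_daily_spend" };
  if (!scope.allowedOperations.includes(operation)) return { allowed: false, reason: "allowed_operations" };
  if (!scope.allowedChains.includes(chain)) return { allowed: false, reason: "allowed_chains" };
  return { allowed: true }; // within the approved envelope
}
```

On a failed check the runtime would block the payment and notify the controller via A2A, as described above.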
16. Adaptive Execution Runtime (RFC-0007)
RFC-0007 defines the Adaptive Execution Upgrade: a runtime layer that automatically selects the optimal execution strategy for a given model, hardware profile, and task requirements—without requiring manual configuration from either the provider or the requester.
16.1 Model Classification
| ModelClass | Typical Size | Default Execution |
|---|---|---|
| Nano | < 1B parameters | CPU inference, full precision |
| Small | 1–7B parameters | GPU inference, INT8 quantization |
| Medium | 7–30B parameters | GPU inference, 4-bit quantization |
| Large | 30–100B parameters | Multi-GPU tensor parallel |
| Frontier | > 100B parameters | Distributed pipeline parallel across nodes |
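The size thresholds in the table translate directly into a classifier. Boundary handling at exactly 7B, 30B, and 100B parameters is an assumption, since the table uses open ranges:

```typescript
// Sketch: ModelClass classification by parameter count, mirroring the
// thresholds in the table above.
type ModelClass = "Nano" | "Small" | "Medium" | "Large" | "Frontier";

function classifyModel(params: number): ModelClass {
  if (params < 1e9) return "Nano";      // < 1B: CPU inference
  if (params <= 7e9) return "Small";    // 1-7B: GPU, INT8
  if (params <= 30e9) return "Medium";  // 7-30B: GPU, 4-bit
  if (params <= 100e9) return "Large";  // 30-100B: multi-GPU tensor parallel
  return "Frontier";                    // > 100B: distributed pipeline parallel
}
```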
16.2 Execution Modes
| ExecutionMode | Description | Use Case |
|---|---|---|
| Local | Single node, all computation on one machine | Nano/Small models, low latency |
| Distributed | Sharded across multiple nodes in a worker pool | Large/Frontier models |
| Offloaded | Model weights partially on CPU, partially on GPU (KV cache offload) | Medium models on memory-constrained hardware |
| Speculative | Draft model + verifier model for faster generation | High-throughput chat applications |
| Batched | Multiple inference requests processed together | Embedding generation, batch analysis |
16.3 Capability Resolution
The CapabilityResolution process matches an inference request to an execution plan:
// CapabilityResolution algorithm
fn resolve(request: InferenceRequest, node: NodeNetworkProfile)
-> ExecutionPlan {
let model_class = classify_model(request.model_id); // Nano..Frontier
let kv_profile = estimate_kv_cache(request, model_class);
let mode = match (model_class, node.gpu_count) {
(Nano | Small, _) => Local,
(Medium, n) if n > 0 => Local,       // with INT4 quantization
(Large, n) if n >= 4 => Distributed, // tensor parallel across n GPUs
(Frontier, _) => Distributed,        // pipeline parallel across worker pool
_ => Offloaded,                      // CPU+GPU hybrid fallback
};
let trust_profile = if request.tee_required {
TrustProfile::TeeAttested
} else {
TrustProfile::BestEffort
};
ExecutionPlan { mode, kv_profile, trust_profile, worker_roles, .. }
}
The resulting ExecutionPlan is returned to the requester as part of the inference response, with an ExecutionReceipt summarizing the actual execution parameters (mode used, latency, tokens generated, cost). This receipt can be used for billing verification and performance auditing.
16.4 Worker Roles
In distributed execution mode, nodes take on specialized WorkerRole assignments:
- Prefill: Processes the prompt (context ingestion). High memory bandwidth, benefits from large KV cache.
- Decode: Generates output tokens autoregressively. Low latency critical, benefits from high GPU clock speed.
- Draft: In speculative decoding, generates candidate tokens from a small draft model.
- Verify: In speculative decoding, validates draft tokens against the target model in parallel.
- Coordinator: Routes requests, aggregates sharded KV cache, manages the worker pool lifecycle.
17. Per-Token Micropayment Settlement
Inference pricing is per-token, but on-chain transactions are expensive (gas fees). Micropayment channels enable off-chain per-token billing with on-chain settlement only at channel open and close.
17.1 Channel Lifecycle
- Open: User submits on-chain transaction to open channel with provider, depositing N TNZO (e.g., 100 TNZO). Transaction creates channel state: user address, provider address, balance (100 TNZO), nonce (0), expiration (30 days).
- Inference (off-chain): User submits inference request. Provider executes model and returns result + updated channel state: balance (99.995 TNZO after 0.005 TNZO deduction for 5 tokens at 0.001 TNZO/token), nonce (1), signed by both parties. This repeats for each request, updating balance and nonce off-chain.
- Close (cooperative): User or provider submits final signed channel state on-chain. Network verifies signatures and nonce, settles balances (provider receives 0.005 TNZO, user receives 99.995 TNZO refund), collects 0.5% commission, and closes channel.
- Close (disputed): If one party is unresponsive, the other party submits their latest signed state. Network enforces a challenge period (24 hours). If the counterparty submits a state with higher nonce, that state is used. After challenge period, channel settles based on highest-nonce state.
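Step 2's off-chain update reduces to a balance deduction and a nonce increment over a mutually signed state. A sketch in atto-TNZO that reproduces the worked numbers from the lifecycle above; the co-signing step is stubbed out:

```typescript
// Sketch: off-chain micropayment channel update. Amounts are in atto-TNZO
// to match on-chain precision; signatures are elided.
type Channel = { balanceAtto: bigint; nonce: number };

const ATTO = 10n ** 18n; // 1 TNZO

function applyInference(ch: Channel, tokens: bigint, pricePerTokenAtto: bigint): Channel {
  const charge = tokens * pricePerTokenAtto;
  if (charge > ch.balanceAtto) throw new Error("insufficient channel balance");
  // In the protocol, both parties co-sign this new state before it becomes
  // the latest canonical channel state.
  return { balanceAtto: ch.balanceAtto - charge, nonce: ch.nonce + 1 };
}

// Worked example from the lifecycle: 100 TNZO deposit, 5 tokens at
// 0.001 TNZO/token leaves 99.995 TNZO at nonce 1.
const opened: Channel = { balanceAtto: 100n * ATTO, nonce: 0 };
const afterFirst = applyInference(opened, 5n, ATTO / 1_000n);
```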
17.2 Network Commission Distribution
The network collects a 0.5% commission on all inference payments at settlement time. Commission distribution:
commission = gross_payment * 0.005 // 0.5%
Distribution:
40% → Network treasury (development, infrastructure, grants)
30% → Burned (deflationary pressure on TNZO supply)
30% → TNZO stakers (proportional to stake amount)
Example:
Gross payment: 100 TNZO
Commission: 0.5 TNZO
Treasury: 0.2 TNZO
Burned: 0.15 TNZO
Stakers: 0.15 TNZO
Provider net: 99.5 TNZO
17.3 Channel State Management
Micropayment channel state requires persistent storage for production deployment. The architecture supports persistent channel state in dedicated storage with automatic recovery after node restart.
Dispute Resolution: The challenge period mechanism allows counterparties to submit higher-nonce states during channel closure. Watchtower services can monitor channels and automatically submit the latest state to prevent fraud.
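The watchtower behavior described above reduces to a single comparison during the challenge period. A sketch with illustrative types; the on-chain submission call is elided:

```typescript
// Sketch: watchtower reaction to a channel-close attempt. During the 24h
// challenge period, a strictly higher-nonce signed state supersedes the
// state used to initiate the close.
type SignedChannelState = {
  channelId: string;
  nonce: number;
  balanceAtto: bigint;
};

function watchtowerReaction(
  closing: SignedChannelState,
  latestKnown: SignedChannelState | undefined
): SignedChannelState | null {
  if (
    latestKnown !== undefined &&
    latestKnown.channelId === closing.channelId &&
    latestKnown.nonce > closing.nonce
  ) {
    return latestKnown; // submit this state before the challenge window closes
  }
  return null; // the closing state is already the latest we know of
}
```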
18. ERC-7802 Cross-Chain Token Standard
Tenzro implements ERC-7802 (SuperchainERC20) to provide a standardized, canonical interface for cross-chain token supply management. Unlike traditional lock-and-mint bridge designs that fragment liquidity and introduce bridge-specific trust assumptions, ERC-7802 defines two core functions that authorized bridge adapters invoke to move tokens across chains while preserving a global supply invariant.
18.1 Core Interface
The ERC-7802 interface defines two functions for cross-chain supply management:
| Function | Parameters | Description |
|---|---|---|
| crosschainMint | tokenId, recipient, amount, sourceChain | Mints tokens on the destination chain after verifying a valid burn on the source chain. Only callable by authorized bridge adapters. |
| crosschainBurn | tokenId, amount, destinationChain | Burns tokens on the source chain, initiating a cross-chain transfer to the destination chain. |
18.2 Cross-Chain Supply Invariant
The fundamental guarantee of ERC-7802 is that total token supply across all chains remains constant. For every crosschainBurn on a source chain, a corresponding crosschainMint of equal amount occurs on the destination chain. The network tracks per-chain supply in real time, enabling visibility into token distribution across all connected chains.
// Cross-chain supply invariant
sum(supply[chain]) for all chains == TOTAL_SUPPLY // constant
// Per-chain supply tracking
supply_on_chain = initial_supply + total_minted - total_burned
// Bridge adapter authorization
authorized_adapters: [LayerZero, CCIP, deBridge, Canton]
18.3 Integration with Sei V2 Pointer Model
ERC-7802 operates in conjunction with the Sei V2 pointer model already used for intra-network cross-VM token representation. Within the Tenzro Network, TNZO exists as wTNZO (ERC-20 pointer) on EVM, an SPL Token adapter on SVM, and a CIP-56 holding contract on Canton—all backed by the same native balance with no bridge risk or liquidity fragmentation. ERC-7802 extends this architecture to external chains, creating a unified token layer that spans both internal VMs and external L1/L2 networks.
The actual cross-chain message transport is handled by the existing bridge adapters (LayerZero V2, Chainlink CCIP, deBridge DLN, and Canton). ERC-7802 provides the standardized token-side interface that these adapters call into, decoupling the supply management logic from the transport mechanism.
18.4 SDK Access
Both the Rust and TypeScript SDKs expose ERC-7802 operations through a dedicated namespace:
// Rust SDK
client.erc7802().crosschain_mint(token_id, recipient, amount, source_chain).await?;
client.erc7802().crosschain_burn(token_id, amount, destination_chain).await?;
client.erc7802().get_cross_chain_supply(token_id).await?;
// TypeScript SDK
await client.erc7802().crosschainMint(tokenId, recipient, amount, sourceChain);
await client.erc7802().crosschainBurn(tokenId, amount, destinationChain);
await client.erc7802().getCrossChainSupply(tokenId);
19. Implementation Roadmap
The Tenzro Network implementation follows a phased development approach, prioritizing core infrastructure before advancing to application-layer features and production hardening.
19.1 Network and Consensus Infrastructure
- Peer authentication with validator set verification
- Message deduplication in gossip layer
- Equivocation detection and automatic slashing
- Atomic epoch transitions with validator set updates
- Mempool management with size limits and transaction eviction
19.2 AI Infrastructure Development
- Model downloading with integrity verification
- Inference routing to model providers
- Agent messaging over network transport
- Chat interface for inference interaction
- Provider health monitoring and circuit breaker integration
19.3 Security and Production Hardening
- TEE hardware attestation integration with certificate chain validation
- Payment credential verification and settlement transaction submission
- Rate limiting on agent message queues
- Time-based decay for provider reputation metrics
- Comprehensive test suite and external security audit
20. Conclusion
Tenzro Network is the operating system for the AI economy — not a blockchain that also does AI, but an AI-native economic system purpose-built for the agentic era. By combining a decentralized AI marketplace, TEE-protected inference, intelligent routing, dynamic pricing, autonomous agent infrastructure, and per-token micropayment settlement, Tenzro provides the foundational protocol where agents are first-class economic actors.
Model providers can permissionlessly register and monetize their models without centralized gatekeepers. Users can discover and access models from multiple providers through a unified interface, paying only for tokens consumed. Agents can autonomously discover models, negotiate pricing, verify results, and settle payments without human intervention. TEE providers earn fees for hardware-rooted trust services, creating a marketplace for confidential computing.
The network's 0.5% commission on inference and TEE payments flows to the treasury (40%), burning (30%), and stakers (30%), aligning incentives across all participants. Providers stake TNZO as collateral, subject to slashing for misbehavior, ensuring economic alignment and service quality.
The core infrastructure is testnet-ready with 233+ RPC methods, 9+ MCP servers, and an extensible ecosystem of skills, tools, and agent templates — all open and permissionless. Remaining work focuses on bridge interoperability, production hardening, and external security audit before mainnet launch.
Tenzro Network is designed for a present — and a future — where AI agents are first-class economic participants, conducting financial transactions, accessing intelligence, and coordinating autonomously. The network is live on testnet with 9+ chain integrations and growing, and we invite developers, model providers, and AI agents to participate, provide feedback, and help build the operating system for the AI economy.
21. References
- Tenzro Protocol: Vision and Ecosystem Overview
- Tenzro Ledger: Settlement Layer
- TNZO Tokenomics: Utility and Governance
- TEE Security: Hardware-Rooted Trust
- Payment Protocols for the AI Economy
- TDIP: Decentralized Identity Protocol
- Zero-Knowledge Proofs: Verifiable Computation
- GitHub Repository
Disclaimer: This whitepaper describes the technical architecture for Tenzro Network as of April 2026. The project is in active development. Implementation details, timelines, and features are subject to change. TNZO is a utility and governance token used for transaction fees, service payments, staking, and governance—it is not a security token or investment contract. The network is live on testnet for development and testing purposes only. Testnet tokens have no monetary value. This document is for informational purposes only and does not constitute financial, legal, or investment advice.