
Models

The model system provides infrastructure for discovering, registering, and managing AI models across the Tenzro Network. It implements a decentralized model registry where providers can advertise their models, and users can discover available intelligence services with dynamic pricing and health monitoring.

Model Registry

The ModelRegistry is a thread-safe catalog of all available AI models on the network. Models are indexed by model ID and can be filtered by ModelModality.

use std::collections::HashMap;

use tenzro_model::registry::ModelRegistry;
use tenzro_types::{ModelInfo, ModelModality, Hash};

// Create a new model registry
let registry = ModelRegistry::new();

// Register a new model
let model = ModelInfo {
    id: Hash::from_bytes([0u8; 32].as_ref()),
    name: "gemma3-270m".to_string(),
    modality: ModelModality::Text,
    version: "1.0".to_string(),
    description: "Compact instruction-tuned language model".to_string(),
    provider_id: provider_address.clone(),
    metadata: HashMap::new(),
};

registry.register_model(model.clone())?;

// List all models, or filter by modality
let all = registry.list_models();
let text_models = registry.get_models_by_modality(ModelModality::Text);
let image_models = registry.get_models_by_modality(ModelModality::Image);

Models are stored in a concurrent data structure indexed by model ID. Each model includes metadata such as name, version, modality, and provider information.
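Conceptually, the registry behaves like a read-write-locked map keyed by model ID: many concurrent readers, exclusive writers. The std-only sketch below illustrates that shape; the type and method names here are simplified stand-ins, not the actual tenzro_model API.

```rust
use std::collections::HashMap;
use std::sync::RwLock;

// Illustrative stand-ins for the real tenzro types.
#[derive(Clone, Debug, PartialEq)]
pub enum Modality { Text, Image }

#[derive(Clone, Debug)]
pub struct Model {
    pub id: [u8; 32],
    pub name: String,
    pub modality: Modality,
}

// A minimal thread-safe catalog: many concurrent readers, exclusive writers.
pub struct Registry {
    models: RwLock<HashMap<[u8; 32], Model>>,
}

impl Registry {
    pub fn new() -> Self {
        Registry { models: RwLock::new(HashMap::new()) }
    }

    pub fn register(&self, model: Model) -> Result<(), String> {
        let mut map = self.models.write().map_err(|e| e.to_string())?;
        if map.contains_key(&model.id) {
            return Err("model already registered".into());
        }
        map.insert(model.id, model);
        Ok(())
    }

    pub fn by_modality(&self, modality: Modality) -> Vec<Model> {
        let map = self.models.read().unwrap();
        map.values().filter(|m| m.modality == modality).cloned().collect()
    }
}

fn main() {
    let registry = Registry::new();
    registry.register(Model {
        id: [0u8; 32],
        name: "gemma3-270m".into(),
        modality: Modality::Text,
    }).unwrap();
    println!("text models: {}", registry.by_modality(Modality::Text).len());
}
```

The read-heavy access pattern (frequent lookups, rare registrations) is what makes a reader-writer lock or a sharded concurrent map a natural fit here.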

Model Modalities

Tenzro organizes models by modality to enable efficient discovery. The ModelModality enum describes the input/output domain a model serves.

// ModelModality (defined in tenzro-types)
pub enum ModelModality {
    Text,        // Chat / language models
    Image,       // Vision encoders, segmentation, detection
    Audio,       // ASR, audio embedding
    Video,       // Video embedding
    Timeseries,  // Forecasting (Chronos, TimesFM, Granite-TTM)
    TextImage,   // Multimodal (text + image)
    TextAudio,   // Multimodal (text + audio)
    Multimodal,  // Multiple modalities
}

// Filter by modality
let text_models = registry.get_models_by_modality(ModelModality::Text);
let image_models = registry.get_models_by_modality(ModelModality::Image);
let forecast_models = registry.get_models_by_modality(ModelModality::Timeseries);

// Or list everything
let all = registry.list_models();

Filtering by modality enables applications to discover the right model for their use case. A chat application calls get_models_by_modality(ModelModality::Text); a forecasting agent calls get_models_by_modality(ModelModality::Timeseries).

Multi-Modal Catalogs

In addition to GGUF chat models, tenzro-model ships seven verified ONNX catalogs covering forecasting, vision, text embeddings, segmentation, object detection, audio (ASR), and video. Each catalog is exposed at runtime via a typed entry struct and a JSON-RPC tenzro_list*Catalog method.

| Catalog | Entry struct | List RPC | Sample models |
|---|---|---|---|
| Forecast | OnnxForecastEntry | tenzro_listForecastCatalog | Chronos-2 (Apache-2.0, 120M, multivariate + covariates), Chronos-Bolt small/base, TimesFM 2.5 200M (16k context), Granite-TTM-r2 |
| Vision | OnnxVisionEntry | tenzro_listVisionCatalog | CLIP ViT-B/32 + L/14 (MIT), SigLIP2 base/large/so400m (Apache-2.0), DINOv3 vits16/vitb16/vitl16 (custom DINOv3 License), DINOv2 small/base/large |
| Text embedding | OnnxTextEmbeddingEntry | tenzro_listTextEmbeddingCatalog | Qwen3-Embedding 0.6B/4B/8B (Apache-2.0, #1 MTEB multilingual June 2025), EmbeddingGemma-300M (Gemma terms, Matryoshka 768/512/256/128), BGE-M3 (MIT), Snowflake Arctic Embed L v2.0 (Apache-2.0) |
| Segmentation | OnnxSegmentationEntry | tenzro_listSegmentationCatalog | SAM 3 / 3.1 (custom SAM License with ITAR/military restrictions), SAM 2 base/large (Apache-2.0), EdgeSAM (MIT), MobileSAM (Apache-2.0) |
| Detection | OnnxDetectionEntry | tenzro_listDetectionCatalog | RF-DETR nano/small/medium/base/large/2xl (Apache-2.0, ICLR 2026, first real-time detector >60 AP COCO), D-FINE n/s/m/l/x (Apache-2.0) |
| Audio (ASR) | OnnxAudioEntry | tenzro_listAudioCatalog | Moonshine v2 tiny/base (MIT, Useful Sensors), Distil-Whisper small.en/medium.en/large-v3 (MIT), Whisper-large-v3-turbo (MIT), Parakeet-TDT-0.6B-v3 (NVIDIA CC-BY-4.0), Canary-1B-Flash (NVIDIA CC-BY-4.0) |
| Video | OnnxVideoEntry | tenzro_listVideoCatalog | Empty in wave 1: no permissive ONNX-shippable encoder-only video model exists in the 2026 OSS landscape; runtime scaffolding ships ready for future entries |

License-Tier Gating

Each catalog entry carries a license_tier enforced centrally by ModelRegistry::register_model():

  • Permissive (Apache-2.0, MIT, BSD) — load by default, no flag required.
  • Attribution (CC-BY-4.0 — Parakeet, Canary) — log attribution at first load.
  • CommercialCustom (DINOv3 License, SAM License, Gemma terms) — require explicit --accept-license <id> per family.
  • NonCommercial (CC-BY-NC, OpenRAIL-M) — refuse to load without --accept-non-commercial.
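The gating decision reduces to a match on the tier plus the operator's flags. The sketch below is illustrative only: the enum variants mirror the tiers above, but the function, flag struct, and field names are assumptions, not the actual ModelRegistry::register_model() code.

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
pub enum LicenseTier {
    Permissive,       // Apache-2.0, MIT, BSD
    Attribution,      // CC-BY-4.0
    CommercialCustom, // DINOv3 / SAM / Gemma terms
    NonCommercial,    // CC-BY-NC, OpenRAIL-M
}

pub struct LicenseFlags {
    pub accepted_license_ids: Vec<String>, // from --accept-license <id>
    pub accept_non_commercial: bool,       // from --accept-non-commercial
}

// Decide whether a model may load.
// Ok(true) means "load and log attribution"; Ok(false) means "just load".
pub fn may_load(tier: LicenseTier, family_id: &str, flags: &LicenseFlags) -> Result<bool, String> {
    match tier {
        LicenseTier::Permissive => Ok(false),
        LicenseTier::Attribution => Ok(true),
        LicenseTier::CommercialCustom => {
            if flags.accepted_license_ids.iter().any(|id| id == family_id) {
                Ok(false)
            } else {
                Err(format!("pass --accept-license {family_id} to load this model"))
            }
        }
        LicenseTier::NonCommercial => {
            if flags.accept_non_commercial {
                Ok(false)
            } else {
                Err("pass --accept-non-commercial to load this model".into())
            }
        }
    }
}

fn main() {
    let flags = LicenseFlags {
        accepted_license_ids: vec!["sam3".into()],
        accept_non_commercial: false,
    };
    println!("{:?}", may_load(LicenseTier::CommercialCustom, "sam3", &flags));
}
```

Centralizing the check in registration (rather than per-runtime) keeps every catalog family under one enforcement point.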

ONNX Vision Catalog

In addition to GGUF chat models, tenzro-model ships a verified catalog of ONNX image encoders for embedding-based workloads (image-text retrieval, image similarity, multi-modal indexing). All seven entries are ungated and directly downloadable from HuggingFace. The catalog is exposed at runtime via get_vision_catalog() and get_vision_model_by_id(), and over JSON-RPC as tenzro_listVisionCatalog.

| Catalog ID | HuggingFace Repo | Input | Embedding Dim | License |
|---|---|---|---|---|
| clip-vit-b32 | Xenova/clip-vit-base-patch32 | 224×224 | 512 | MIT |
| clip-vit-l14 | Xenova/clip-vit-large-patch14 | 224×224 | 768 | MIT |
| siglip-base-224 | Xenova/siglip-base-patch16-224 | 224×224 | 768 | Apache-2.0 |
| siglip2-base-224 | onnx-community/siglip2-base-patch16-224-ONNX | 224×224 | 768 | Apache-2.0 |
| dinov2-small | Xenova/dinov2-small | 224×224 | 384 | Apache-2.0 |
| dinov2-base | Xenova/dinov2-base | 224×224 | 768 | Apache-2.0 |
| dinov2-large | Xenova/dinov2-large | 224×224 | 1024 | Apache-2.0 |

The runtime is built on ORT and the image crate, with Lanczos3 resize and three normalization presets (clip, imagenet, siglip) covering all catalog entries. Loading a catalog model takes a single RPC call:

// Catalog shortcut — runtime resolves repo, input_size, embedding_dim, normalization
{
  "jsonrpc": "2.0",
  "method": "tenzro_loadVisionModel",
  "params": [{ "catalog_id": "clip-vit-b32" }],
  "id": 1
}

// Then encode any PNG/JPEG/WebP
{
  "jsonrpc": "2.0",
  "method": "tenzro_imageEmbed",
  "params": [{
    "model_id": "clip-vit-b32",
    "image_base64": "iVBORw0KGgo...",
    "normalize": true
  }],
  "id": 1
}
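Each preset maps a pixel value in [0, 1] to (x − mean) / std per channel. The constants below are the widely published CLIP and ImageNet values (SigLIP conventionally uses 0.5/0.5); the preset names mirror this page, but the function itself is an illustrative sketch, not the runtime's preprocessing code.

```rust
// Per-channel (R, G, B) mean/std for each normalization preset.
fn preset_stats(preset: &str) -> Option<([f32; 3], [f32; 3])> {
    match preset {
        "clip" => Some((
            [0.481_454_66, 0.457_827_5, 0.408_210_73],
            [0.268_629_54, 0.261_302_58, 0.275_777_11],
        )),
        "imagenet" => Some(([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])),
        "siglip" => Some(([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])),
        _ => None,
    }
}

// Normalize one RGB pixel already scaled to [0, 1].
fn normalize_pixel(rgb: [f32; 3], preset: &str) -> Option<[f32; 3]> {
    let (mean, std) = preset_stats(preset)?;
    Some([
        (rgb[0] - mean[0]) / std[0],
        (rgb[1] - mean[1]) / std[1],
        (rgb[2] - mean[2]) / std[2],
    ])
}

fn main() {
    // A mid-gray pixel under the siglip preset maps to 0.0 per channel.
    println!("{:?}", normalize_pixel([0.5, 0.5, 0.5], "siglip"));
}
```

Using the wrong preset for a model silently degrades embedding quality, which is why the catalog binds each entry to its normalization.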

See the API Reference for the full vision RPC namespace, and the Inference page for the relationship between embedding-based vision models and image-input chat models (the image content block on tenzro_chat).

Timeseries Foundation Models

For univariate forecasting, the node hosts an ONNX-backed TimeseriesRuntime that runs Chronos-Bolt, Granite-TTM, and TimesFM 2.5 with a unified input/output contract: input shape [1, context_len], output shape [1, H] for point forecasts or [1, H, Q] when the model emits per-step quantiles. Models are loaded by HuggingFace repo + filename via tenzro_loadForecastModel and invoked with tenzro_forecast. Source weights are exported to ONNX via the tools/ts-export/ Python harness; published exports are served from the tenzro/timeseries-onnx HuggingFace org.
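The [1, H, Q] quantile contract can be made concrete with a std-only sketch: given a flattened quantile tensor and the emitted quantile levels, extract a point forecast by taking the level nearest the median at each horizon step. This is illustrative code, not the TimeseriesRuntime itself.

```rust
// Extract a point forecast from a flattened [1, H, Q] quantile tensor by
// picking the quantile level closest to 0.5 at each horizon step.
fn point_forecast(quantiles: &[f32], horizon: usize, levels: &[f32]) -> Vec<f32> {
    assert_eq!(quantiles.len(), horizon * levels.len(), "shape mismatch");
    // Index of the quantile level nearest the median.
    let med = levels
        .iter()
        .enumerate()
        .min_by(|a, b| (a.1 - 0.5).abs().partial_cmp(&(b.1 - 0.5).abs()).unwrap())
        .map(|(i, _)| i)
        .unwrap();
    (0..horizon).map(|h| quantiles[h * levels.len() + med]).collect()
}

fn main() {
    // H = 2 steps, Q = 3 levels (0.1, 0.5, 0.9), row-major [H, Q] layout.
    let q = [9.0, 10.0, 11.0, 19.0, 20.0, 21.0];
    let levels = [0.1, 0.5, 0.9];
    println!("{:?}", point_forecast(&q, 2, &levels));
}
```

A unified contract like this is what lets Chronos-Bolt, Granite-TTM, and TimesFM exports share one invocation path behind tenzro_forecast.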

Provider Management

The ProviderManager tracks model providers with health monitoring, reputation scoring, and circuit breaker patterns for fault tolerance.

use std::collections::HashMap;

use tenzro_model::provider::{
    ProviderManager, ProviderInfo, ProviderStatus,
};

// Create provider manager
let provider_mgr = ProviderManager::new();

// Register a model provider
let provider = ProviderInfo {
    id: provider_address.clone(),
    endpoint: "https://inference.example.com".to_string(),
    supported_models: vec![model_id.clone()],
    status: ProviderStatus::Active,
    reputation: 100.0,
    total_requests: 0,
    successful_requests: 0,
    average_latency_ms: 0,
    metadata: HashMap::new(),
};

provider_mgr.register(provider)?;

// Get providers for a specific model
let providers = provider_mgr.get_providers_for_model(&model_id);
for p in providers {
    println!("Provider: {} (rep: {})", p.id, p.reputation);
}

Providers maintain statistics including total requests, successful requests, average latency, and reputation score. The reputation system enables the inference router to prefer high-quality providers.
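How reputation is derived from these statistics is not specified here; one plausible sketch (purely illustrative, not the actual scoring formula) blends success rate with a capped latency penalty:

```rust
// Illustrative reputation score in [0, 100]: success rate scaled to 100,
// minus a latency penalty of 1 point per 100 ms, capped at 20 points.
fn reputation(total: u64, successful: u64, avg_latency_ms: u64) -> f64 {
    if total == 0 {
        return 100.0; // new providers start with a full score
    }
    let success_rate = successful as f64 / total as f64;
    let latency_penalty = (avg_latency_ms as f64 / 100.0).min(20.0);
    (success_rate * 100.0 - latency_penalty).max(0.0)
}

fn main() {
    // 99% success rate at 250 ms average latency.
    println!("{}", reputation(1000, 990, 250));
}
```

Whatever the exact formula, keeping the score monotone in success rate and anti-monotone in latency is what makes it usable as a routing signal.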

Health Monitoring

Provider health monitoring tracks request success rates and latency. Providers that consistently fail requests are automatically marked inactive via circuit breaker logic.

use tenzro_model::provider::CircuitBreaker;

// Circuit breaker protects against failing providers
let circuit_breaker = CircuitBreaker::new(
    5,    // failure threshold
    60,   // timeout seconds
);

// Record successful request
circuit_breaker.record_success();

// Record failed request
circuit_breaker.record_failure();

// Check if circuit is open (provider unavailable)
if circuit_breaker.is_open() {
    println!("Provider circuit open, routing to backup");
}

// Get provider health status
let status = provider_mgr.get_status(&provider_id)?;
match status {
    ProviderStatus::Active => println!("Healthy"),
    ProviderStatus::Inactive => println!("Circuit open"),
    ProviderStatus::Suspended => println!("Suspended"),
}

Note: The circuit breaker methods record_failure() and record_success() exist but are never invoked by the inference router. Health monitoring is not yet wired up.
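The underlying pattern is simple enough to sketch with std only. This is the generic circuit-breaker pattern (trip open after N consecutive failures, allow traffic again once the timeout elapses), not tenzro_model's implementation:

```rust
use std::time::{Duration, Instant};

// Minimal circuit breaker: opens after `threshold` consecutive failures,
// allows traffic again once `timeout` has elapsed since the trip.
struct Breaker {
    threshold: u32,
    timeout: Duration,
    consecutive_failures: u32,
    opened_at: Option<Instant>,
}

impl Breaker {
    fn new(threshold: u32, timeout: Duration) -> Self {
        Breaker { threshold, timeout, consecutive_failures: 0, opened_at: None }
    }

    fn record_success(&mut self) {
        self.consecutive_failures = 0;
        self.opened_at = None;
    }

    fn record_failure(&mut self) {
        self.consecutive_failures += 1;
        if self.consecutive_failures >= self.threshold && self.opened_at.is_none() {
            self.opened_at = Some(Instant::now());
        }
    }

    fn is_open(&self) -> bool {
        match self.opened_at {
            Some(t) => t.elapsed() < self.timeout, // re-close after the timeout
            None => false,
        }
    }
}

fn main() {
    let mut b = Breaker::new(5, Duration::from_secs(60));
    for _ in 0..5 { b.record_failure(); }
    println!("open after 5 failures: {}", b.is_open());
}
```

Production implementations usually add a half-open state that admits one probe request before fully re-closing; the sketch omits that for brevity.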

Dynamic Pricing

The PricingEngine calculates inference costs based on model type, input/output token counts, and provider-specific pricing. Costs are denominated in TNZO.

use tenzro_model::pricing::{
    PricingEngine, PricingTier, CostCalculation,
};

// Create pricing engine
let pricing = PricingEngine::new();

// Set model pricing (TNZO per 1M tokens)
pricing.set_price(
    &model_id,
    &provider_id,
    15_000_000,  // 15 TNZO per 1M input tokens
    60_000_000,  // 60 TNZO per 1M output tokens
)?;

// Calculate cost for inference request
let cost = pricing.calculate_cost(
    &model_id,
    &provider_id,
    1500,  // input tokens
    500,   // output tokens
)?;

println!("Input cost: {} TNZO", cost.input_cost);
println!("Output cost: {} TNZO", cost.output_cost);
println!("Total: {} TNZO", cost.total_cost);

Pricing is per-provider, allowing market-driven rates. Providers can set different prices for the same model, enabling competition. The inference router can select providers based on price as one of its routing strategies.
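With prices quoted per 1M tokens, the arithmetic reduces to a pair of multiplications. A std-only sketch of the presumed calculation (field names are illustrative; the real CostCalculation may differ):

```rust
// Costs are computed from prices quoted per 1,000,000 tokens.
struct Cost { input: u128, output: u128, total: u128 }

fn calculate_cost(
    input_tokens: u64,
    output_tokens: u64,
    input_price_per_m: u64,  // price per 1M input tokens
    output_price_per_m: u64, // price per 1M output tokens
) -> Cost {
    // Widen to u128 before multiplying so large token counts cannot overflow.
    let input = input_tokens as u128 * input_price_per_m as u128 / 1_000_000;
    let output = output_tokens as u128 * output_price_per_m as u128 / 1_000_000;
    Cost { input, output, total: input + output }
}

fn main() {
    // 1500 input + 500 output tokens at 15M / 60M per 1M tokens.
    let c = calculate_cost(1500, 500, 15_000_000, 60_000_000);
    println!("input={} output={} total={}", c.input, c.output, c.total);
}
```

Doing the division after the multiplication (in a wider integer type) avoids both truncation error and overflow for realistic token counts.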

Pricing Tiers

Providers can offer tiered pricing based on commitment levels, enabling discounts for high-volume users or subscriptions.

// Pricing tiers
pub enum PricingTier {
    PayAsYouGo,      // Standard per-token pricing
    Monthly,         // Monthly subscription with discount
    Enterprise,      // Custom enterprise pricing
}

// Set tier-based pricing
pricing.set_tier_price(
    &model_id,
    &provider_id,
    PricingTier::Monthly,
    12_000_000,  // 20% discount on input
    48_000_000,  // 20% discount on output
)?;

// Calculate with tier
let discounted = pricing.calculate_cost_with_tier(
    &model_id,
    &provider_id,
    1500,
    500,
    PricingTier::Monthly,
)?;

Tier-based pricing integrates with the identity system. Users with monthly subscriptions stored in their TDIP identity credentials automatically receive discounted rates.
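The Monthly rates in the snippet above are simply the base rates scaled down by 20%; a small helper makes that relationship explicit (illustrative only, not a PricingEngine method):

```rust
// Apply a percentage discount to a base per-1M-token price.
fn discounted_price(base_price: u64, discount_pct: u64) -> u64 {
    assert!(discount_pct <= 100);
    base_price * (100 - discount_pct) / 100
}

fn main() {
    // 20% off the 15M / 60M base rates used earlier on this page.
    println!("{}", discounted_price(15_000_000, 20));
    println!("{}", discounted_price(60_000_000, 20));
}
```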

Model Download Manager

The DownloadManager tracks model download progress when providers pull model weights to their local infrastructure. It supports pause/resume and progress tracking.

use tenzro_model::download::{
    DownloadManager, DownloadTask, DownloadStatus,
};

// Create download manager
let downloads = DownloadManager::new();

// Start downloading a model
let task_id = downloads.start_download(
    model_id.clone(),
    "https://models.tenzro.network/claude-4.weights".to_string(),
    "/var/tenzro/models/".to_string(),
)?;

// Check progress
let task = downloads.get_task(&task_id)?;
match task.status {
    DownloadStatus::Pending => println!("Queued"),
    DownloadStatus::InProgress => {
        println!("Progress: {}%", task.progress);
        println!("Downloaded: {} / {} bytes",
            task.downloaded_bytes, task.total_bytes);
    }
    DownloadStatus::Completed => println!("Complete"),
    DownloadStatus::Failed => println!("Error: {}", task.error),
}

// Pause and resume
downloads.pause(&task_id)?;
downloads.resume(&task_id)?;

// Cancel download
downloads.cancel(&task_id)?;

Note: The current download implementation is a stub. start_download() marks the task as started but performs no actual download.

Model Integrity Verification

Models include cryptographic hashes to ensure integrity. Providers verify model weights match the registered hash before serving inference requests.

use std::collections::HashMap;

use tenzro_model::verification::verify_model_hash;
use tenzro_crypto::hashing::sha256;

// Model includes expected hash
let model = ModelInfo {
    id: Hash::from_bytes([0x42; 32].as_ref()),
    name: "GPT-5".to_string(),
    // ... other fields ...
    metadata: {
        let mut m = HashMap::new();
        m.insert("weights_hash".to_string(),
            "abcd1234...".to_string());
        m
    },
};

// Provider verifies downloaded weights
let weights_path = "/var/tenzro/models/gpt5.weights";
let actual_hash = sha256(&std::fs::read(weights_path)?);

let expected = model.metadata.get("weights_hash").unwrap();
assert_eq!(
    hex::encode(actual_hash.as_bytes()),
    expected,
    "Model integrity check failed!"
);

Note: The verify_model_hash() function exists but is never called during registration or download. Model integrity verification is not enforced.

Model Registration Flow

Providers register models by submitting a registration transaction to the Tenzro Ledger. The model registry maintains both on-chain (permanent) and off-chain (performance) storage.

// Full model registration flow

// 1. Provider creates model info
let model = ModelInfo {
    id: Hash::random(),
    name: "Gemma-4-9B".to_string(),
    modality: ModelModality::Text,
    version: "4.0".to_string(),
    description: "Open-source language model".to_string(),
    provider_id: provider_address.clone(),
    metadata: HashMap::from([
        ("context_length".to_string(), "8192".to_string()),
        ("parameters".to_string(), "70B".to_string()),
        ("weights_hash".to_string(), "abc123...".to_string()),
    ]),
};

// 2. Submit registration transaction
let tx = Transaction::new(
    TransactionType::ModelRegistration {
        model: model.clone(),
    },
    provider_address.clone(),
);

// 3. Transaction is validated and included in block
// 4. Registry indexes the model for fast lookup
registry.register_model(model)?;

// 5. Model is now discoverable
let text_models = registry.get_models_by_modality(ModelModality::Text);

On-chain registration provides permanence and censorship resistance. The off-chain registry cache provides fast lookups for inference routing without blockchain round-trips.

Gossipsub Model Discovery

New model registrations are propagated via the tenzro/models/1.0.0 gossipsub topic. Nodes subscribe to this topic to maintain up-to-date model catalogs.

// Model registration gossip message
{
  "type": "model_registered",
  "model_id": "0x1234...",
  "name": "Gemma-4-9B",
  "modality": "Text",
  "provider_id": "0x5678...",
  "block_height": 12345,
  "timestamp": 1711234567
}

Gossipsub propagation enables near-instant discovery of new models across the network. Nodes validate the registration transaction before updating their local registry to prevent spam.
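Before the (expensive) transaction validation, a node can apply cheap structural checks to a gossiped registration. The field names below follow the message above; the validation rules themselves are an illustrative sketch, not the node's actual spam filter:

```rust
// Fields from the model_registered gossip message.
struct ModelRegistered {
    model_id: String,
    name: String,
    modality: String,
    provider_id: String,
    block_height: u64,
    timestamp: u64,
}

// Cheap structural checks before full transaction validation.
fn basic_checks(msg: &ModelRegistered, local_height: u64, now: u64) -> Result<(), String> {
    if !msg.model_id.starts_with("0x") || !msg.provider_id.starts_with("0x") {
        return Err("ids must be 0x-prefixed".into());
    }
    if msg.name.is_empty() || msg.modality.is_empty() {
        return Err("missing name or modality".into());
    }
    if msg.block_height > local_height + 1 {
        return Err("block height from the future".into());
    }
    if msg.timestamp > now + 300 {
        return Err("timestamp too far ahead".into());
    }
    Ok(())
}

fn main() {
    let msg = ModelRegistered {
        model_id: "0x1234".into(),
        name: "Gemma-4-9B".into(),
        modality: "Text".into(),
        provider_id: "0x5678".into(),
        block_height: 12345,
        timestamp: 1_711_234_567,
    };
    println!("{:?}", basic_checks(&msg, 12345, 1_711_234_600));
}
```

Rejecting malformed messages before touching the ledger keeps gossip handling cheap under spam.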

Provider Onboarding

Becoming a model provider requires registering an identity, staking TNZO, and advertising model availability. The process integrates with TDIP identity and the staking system.

// Provider onboarding flow

// 1. Register provider identity (TDIP autonomous machine)
let result = identity_registry.register_autonomous_machine_with_fee(
    public_key,
    vec!["inference".to_string()],
).await?;
let provider_did = result.identity.did();

// 2. Stake TNZO for provider role
let stake_amount: u128 = 10_000 * 10u128.pow(18);  // 10k TNZO in base units (10^22 overflows u64)
staking_manager.stake(
    provider_address.clone(),
    stake_amount,
    StakingRole::ModelProvider,
)?;

// 3. Register provider info
let provider_info = ProviderInfo {
    id: provider_address.clone(),
    endpoint: "https://inference.example.com".to_string(),
    supported_models: vec![],  // Initially empty
    status: ProviderStatus::Active,
    reputation: 100.0,
    // ... metrics ...
};

provider_mgr.register(provider_info)?;

// 4. Register models (as shown earlier)
// 5. Start serving inference requests

Staking aligns incentives by requiring providers to lock capital. Misbehaving providers (serving incorrect outputs, downtime) can be slashed, reducing their stake and reputation.

Note: Provider registration has no access control. Anyone can register fake providers without staking or identity verification.

Provider SDK Operations

The Rust and TypeScript SDKs provide a ProviderClient for managing provider schedules, pricing, model endpoints, and lightweight node participation.

Provider Schedule

Configure when your provider node is available to serve inference requests. This is useful for providers who want to limit serving to specific hours or days.

// Rust - Set and retrieve provider schedule
let schedule = serde_json::json!({
    "enabled": true,
    "start_hour": 8,
    "end_hour": 22,
    "days_of_week": [true, true, true, true, true, false, false],
    "timezone": "UTC"
});
client.provider().set_provider_schedule(schedule).await?;
let current = client.provider().get_provider_schedule().await?;
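The schedule fields imply a simple availability predicate on the serving side. A std-only sketch of those semantics (illustrative, not SDK code; the struct mirrors the JSON above, including a Monday-first days_of_week array):

```rust
struct Schedule {
    enabled: bool,
    start_hour: u8,          // inclusive, 0-23
    end_hour: u8,            // exclusive, 0-24
    days_of_week: [bool; 7], // Monday-first, matching the JSON above
}

// Is the provider available at the given day index (0 = Monday) and hour?
fn is_available(s: &Schedule, day: usize, hour: u8) -> bool {
    if !s.enabled || !s.days_of_week[day] {
        return false;
    }
    if s.start_hour <= s.end_hour {
        hour >= s.start_hour && hour < s.end_hour
    } else {
        // Overnight window, e.g. 22 -> 6.
        hour >= s.start_hour || hour < s.end_hour
    }
}

fn main() {
    let s = Schedule {
        enabled: true,
        start_hour: 8,
        end_hour: 22,
        days_of_week: [true, true, true, true, true, false, false],
    };
    println!("Wed 12:00 available: {}", is_available(&s, 2, 12));
    println!("Sat 12:00 available: {}", is_available(&s, 5, 12));
}
```

Note the overnight case: a schedule like 22:00 to 06:00 wraps midnight, so the two hour comparisons must be joined with OR rather than AND.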

Provider Pricing

Set per-token pricing for your provider. Pricing is denominated in TNZO and supports separate input and output token rates.

// Rust - Set provider pricing
let pricing = serde_json::json!({
    "input_price_per_token": 0.0001,
    "output_price_per_token": 0.0002,
    "currency": "TNZO"
});
client.provider().set_provider_pricing(pricing).await?;

Model Endpoint Management

Register, query, and unregister model endpoints. Each endpoint represents a specific model instance served by a provider, with optional MCP server URL for tool discovery.

// Rust - Register a remote model endpoint
let result = client.provider().register_model_endpoint(
    "gemma4-9b",
    "https://my-provider.com/v1/chat",
    Some("https://my-provider.com/mcp"),
    Some("Gemma 4 9B"),
    Some("MyProvider"),
).await?;

// Get a specific endpoint
let ep = client.provider().get_model_endpoint("instance-123").await?;

// Unregister
client.provider().unregister_model_endpoint("instance-123").await?;

Micro Node

Join the network as a lightweight participant. Micro nodes contribute to network liveness and peer discovery without running a full validator or serving models.

// Rust - Join as a lightweight participant
let result = client.provider().join_as_micro_node(
    Some("My Light Node"),
    Some("human"),
).await?;

Integration with Inference Router

The model registry and provider manager feed the inference router, which selects the optimal provider for each request based on configurable strategies.

use tenzro_model::router::{
    InferenceRouter, RoutingStrategy,
};

// Create router with model registry and provider manager
let router = InferenceRouter::new(
    registry.clone(),
    provider_mgr.clone(),
);

// Route request using price strategy
let provider = router.select_provider(
    &model_id,
    RoutingStrategy::Cheapest,
)?;

println!("Selected provider: {}", provider.id);
println!("Endpoint: {}", provider.endpoint);

The router abstracts provider selection from the application. Users request inference for a model ID, and the router handles provider discovery, selection, and failover. See the Inference page for details on routing strategies.
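Selecting by price reduces to a minimum over the healthy candidate set. A sketch of what a Cheapest strategy might do over provider quotes (illustrative, not the router's actual code):

```rust
#[derive(Clone, Debug, PartialEq)]
struct Quote {
    provider_id: String,
    total_price_per_m: u64, // input + output price per 1M tokens
    healthy: bool,
}

// Cheapest strategy: lowest price among healthy providers.
fn select_cheapest(quotes: &[Quote]) -> Option<&Quote> {
    quotes
        .iter()
        .filter(|q| q.healthy)
        .min_by_key(|q| q.total_price_per_m)
}

fn main() {
    let quotes = vec![
        Quote { provider_id: "a".into(), total_price_per_m: 75_000_000, healthy: true },
        Quote { provider_id: "b".into(), total_price_per_m: 60_000_000, healthy: false },
        Quote { provider_id: "c".into(), total_price_per_m: 70_000_000, healthy: true },
    ];
    println!("{:?}", select_cheapest(&quotes).map(|q| &q.provider_id));
}
```

Filtering on health before comparing price is what makes circuit-breaker state and pricing compose: an open circuit removes a provider from consideration regardless of how cheap it is.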

RPC Integration

The tenzro-node RPC server exposes model registry functions via JSON-RPC. Applications query models and providers without running a full node.

# RPC methods for model management

# List all models
curl -X POST https://rpc.tenzro.network \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "tenzro_listModels",
    "params": [],
    "id": 1
  }'

# Get a specific model endpoint (live API URL, status, provider)
curl -X POST https://rpc.tenzro.network \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "tenzro_getModelEndpoint",
    "params": ["qwen3-0.6b"],
    "id": 1
  }'

# Register a model endpoint (requires provider auth)
curl -X POST https://rpc.tenzro.network \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "tenzro_registerModelEndpoint",
    "params": [{
      "model_id":   "qwen3-0.6b",
      "api_url":    "https://provider.example.com/v1",
      "mcp_url":    "https://provider.example.com/mcp",
      "provider":   "0xabc123..."
    }],
    "id": 1
  }'

# Provider-side: serve a model (loads it and registers the endpoint atomically)
curl -X POST https://rpc.tenzro.network \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "tenzro_serveModel",
    "params": [{ "model_id": "qwen3-0.6b" }],
    "id": 1
  }'

The MCP server on port 3001 also exposes a list_models tool, enabling AI agents to discover available models programmatically.