Models
The model system provides infrastructure for discovering, registering, and managing AI models across the Tenzro Network. It implements a decentralized model registry where providers advertise their models and consumers discover available inference services, with dynamic pricing and provider health monitoring built in.
Model Registry
The ModelRegistry is a thread-safe catalog of all available AI models on the network. Models are categorized by type (Language, Vision, Audio, Multimodal) and can be filtered by category and modality.
use std::collections::HashMap;
use tenzro_model::{
    registry::ModelRegistry,
    types::{ModelInfo, ModelCategory, Modality},
};
use tenzro_types::Hash;

// Create a new model registry
let registry = ModelRegistry::new();

// Register a new model (provider_address is assumed to be in scope)
let model = ModelInfo {
    id: Hash::from_bytes([0u8; 32].as_ref()),
    name: "Claude-4-Sonnet".to_string(),
    category: ModelCategory::Language,
    modality: Modality::TextToText,
    version: "4.5".to_string(),
    description: "Frontier language model".to_string(),
    provider_id: provider_address.clone(),
    metadata: HashMap::new(),
};
registry.register(model.clone())?;

// Retrieve model by ID
let retrieved = registry.get(&model.id)?;
println!("Model: {} v{}", retrieved.name, retrieved.version);
Models are stored in a concurrent data structure indexed by model ID. Each model includes metadata such as name, version, category (Language, Vision, Audio, Multimodal), modality (TextToText, TextToImage, ImageToText, etc.), and provider information.
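That concurrent layout can be sketched roughly as follows. This is an illustrative stand-in using an RwLock-guarded HashMap with simplified types, not the actual tenzro_model internals:

```rust
use std::collections::HashMap;
use std::sync::RwLock;

// Simplified stand-in for the real ModelInfo (illustrative only).
#[derive(Clone, Debug, PartialEq)]
struct ModelInfo {
    id: [u8; 32],
    name: String,
    version: String,
}

// A thread-safe registry: writes take an exclusive lock, reads a shared one.
struct ModelRegistry {
    models: RwLock<HashMap<[u8; 32], ModelInfo>>,
}

impl ModelRegistry {
    fn new() -> Self {
        Self { models: RwLock::new(HashMap::new()) }
    }

    // Reject duplicate registrations for the same model ID.
    fn register(&self, model: ModelInfo) -> Result<(), String> {
        let mut models = self.models.write().map_err(|e| e.to_string())?;
        if models.contains_key(&model.id) {
            return Err("model already registered".into());
        }
        models.insert(model.id, model);
        Ok(())
    }

    // Clone out of the shared-read guard so the lock is released immediately.
    fn get(&self, id: &[u8; 32]) -> Option<ModelInfo> {
        self.models.read().ok()?.get(id).cloned()
    }
}
```

A production registry would also maintain secondary indexes by category and modality so the list_by_* calls do not scan the whole map.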
Model Categories and Modalities
Tenzro organizes models by category and modality to enable efficient discovery. Categories represent the broad domain, while modalities describe the input/output types.
// Model categories
pub enum ModelCategory {
    Language,   // LLMs, text generation
    Vision,     // Image recognition, object detection
    Audio,      // Speech-to-text, text-to-speech
    Multimodal, // Vision + language
}

// Modalities (input → output)
pub enum Modality {
    TextToText,      // GPT, Claude
    TextToImage,     // DALL-E, Stable Diffusion
    ImageToText,     // OCR, image captioning
    AudioToText,     // Whisper
    TextToAudio,     // TTS models
    MultimodalInput, // Image + text → text
}

// Filter models by category
let language_models = registry.list_by_category(ModelCategory::Language);

// Filter by modality
let text_to_image = registry.list_by_modality(Modality::TextToImage);
Filtering by category and modality enables applications to discover the right model for their use case. For example, a chat application would filter for ModelCategory::Language with Modality::TextToText.
Note: The current modality filter uses exact matching, which doesn't support multimodal models that accept multiple input types.
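The exact-match behavior described in the note can be illustrated with a minimal filter over simplified, hypothetical types:

```rust
// Illustrative subset of the Modality enum (simplified for the example).
#[derive(Clone, Copy, PartialEq)]
enum Modality {
    TextToText,
    MultimodalInput,
}

struct Entry {
    name: &'static str,
    modality: Modality,
}

// Exact-match filter, mirroring how the current registry behaves.
fn list_by_modality(entries: &[Entry], wanted: Modality) -> Vec<&'static str> {
    entries
        .iter()
        .filter(|e| e.modality == wanted)
        .map(|e| e.name)
        .collect()
}
```

A multimodal model registered as MultimodalInput is skipped by a TextToText query even though it also accepts text, which is exactly the gap the note points out.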
Provider Management
The ProviderManager tracks model providers with health monitoring, reputation scoring, and circuit breaker patterns for fault tolerance.
use std::collections::HashMap;
use tenzro_model::provider::{
    ProviderManager, ProviderInfo, ProviderStatus,
};

// Create provider manager
let provider_mgr = ProviderManager::new();

// Register a model provider
let provider = ProviderInfo {
    id: provider_address.clone(),
    endpoint: "https://inference.example.com".to_string(),
    supported_models: vec![model_id.clone()],
    status: ProviderStatus::Active,
    reputation: 100.0,
    total_requests: 0,
    successful_requests: 0,
    average_latency_ms: 0,
    metadata: HashMap::new(),
};
provider_mgr.register(provider)?;

// Get providers for a specific model
let providers = provider_mgr.get_providers_for_model(&model_id);
for p in providers {
    println!("Provider: {} (rep: {})", p.id, p.reputation);
}
Providers maintain statistics including total requests, successful requests, average latency, and reputation score. The reputation system enables the inference router to prefer high-quality providers.
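One plausible way to derive such a score from those statistics is a success ratio on a 0-100 scale. This formula is an assumption for illustration; the actual scoring function is not specified here:

```rust
// Reputation as the percentage of requests that succeeded (assumed formula).
fn reputation(successful_requests: u64, total_requests: u64) -> f64 {
    if total_requests == 0 {
        // New providers start at full reputation, matching the example above.
        100.0
    } else {
        successful_requests as f64 / total_requests as f64 * 100.0
    }
}
```

A real scoring function would likely also weight recent requests more heavily and factor in latency, so a provider cannot coast on old history.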
Health Monitoring
Provider health monitoring tracks request success rates and latency. Providers that consistently fail requests are automatically marked inactive via circuit breaker logic.
use tenzro_model::provider::CircuitBreaker;

// Circuit breaker protects against failing providers
let circuit_breaker = CircuitBreaker::new(
    5,  // failure threshold
    60, // timeout seconds
);

// Record successful request
circuit_breaker.record_success();

// Record failed request
circuit_breaker.record_failure();

// Check if circuit is open (provider unavailable)
if circuit_breaker.is_open() {
    println!("Provider circuit open, routing to backup");
}

// Get provider health status
let status = provider_mgr.get_status(&provider_id)?;
match status {
    ProviderStatus::Active => println!("Healthy"),
    ProviderStatus::Inactive => println!("Circuit open"),
    ProviderStatus::Suspended => println!("Suspended"),
}
Note: The circuit breaker methods record_failure() and record_success() exist but are never invoked by the inference router. Health monitoring is not yet wired up.
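Until that wiring lands, the intended behavior can be sketched as a minimal standalone circuit breaker. This is an illustrative implementation under assumed semantics (consecutive-failure counting, fixed timeout), not the tenzro_model one:

```rust
use std::time::{Duration, Instant};

// Minimal circuit breaker: opens after `threshold` consecutive failures
// and stays open until `timeout` has elapsed since the last failure.
struct CircuitBreaker {
    threshold: u32,
    timeout: Duration,
    consecutive_failures: u32,
    last_failure: Option<Instant>,
}

impl CircuitBreaker {
    fn new(threshold: u32, timeout_secs: u64) -> Self {
        Self {
            threshold,
            timeout: Duration::from_secs(timeout_secs),
            consecutive_failures: 0,
            last_failure: None,
        }
    }

    // A single success closes the circuit and resets the failure count.
    fn record_success(&mut self) {
        self.consecutive_failures = 0;
        self.last_failure = None;
    }

    fn record_failure(&mut self) {
        self.consecutive_failures += 1;
        self.last_failure = Some(Instant::now());
    }

    // Open while over threshold and still inside the cool-down window;
    // after the timeout the circuit half-opens and requests may be retried.
    fn is_open(&self) -> bool {
        match self.last_failure {
            Some(t) if self.consecutive_failures >= self.threshold => {
                t.elapsed() < self.timeout
            }
            _ => false,
        }
    }
}
```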
Dynamic Pricing
The PricingEngine calculates inference costs based on model type, input/output token counts, and provider-specific pricing. Costs are denominated in TNZO.
use tenzro_model::pricing::{
    PricingEngine, PricingTier, CostCalculation,
};

// Create pricing engine
let pricing = PricingEngine::new();

// Set model pricing (TNZO per 1M tokens)
pricing.set_price(
    &model_id,
    &provider_id,
    15_000_000, // 15 TNZO per 1M input tokens
    60_000_000, // 60 TNZO per 1M output tokens
)?;

// Calculate cost for inference request
let cost = pricing.calculate_cost(
    &model_id,
    &provider_id,
    1500, // input tokens
    500,  // output tokens
)?;
println!("Input cost: {} TNZO", cost.input_cost);
println!("Output cost: {} TNZO", cost.output_cost);
println!("Total: {} TNZO", cost.total_cost);
Pricing is per-provider, allowing market-driven rates. Providers can set different prices for the same model, enabling competition. The inference router can select providers based on price as one of its routing strategies.
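The per-million-token arithmetic behind a calculate_cost call can be sketched as below. The struct fields and integer representation are assumptions; widening to u128 keeps token_count × price from overflowing:

```rust
// Result of a cost calculation (field names assumed from the example above).
struct CostCalculation {
    input_cost: u128,
    output_cost: u128,
    total_cost: u128,
}

// Prices are quoted per 1M tokens, so each leg is tokens * price / 1_000_000.
fn calculate_cost(
    input_tokens: u64,
    output_tokens: u64,
    input_price_per_million: u64,  // TNZO base units per 1M input tokens
    output_price_per_million: u64, // TNZO base units per 1M output tokens
) -> CostCalculation {
    let input_cost =
        input_tokens as u128 * input_price_per_million as u128 / 1_000_000;
    let output_cost =
        output_tokens as u128 * output_price_per_million as u128 / 1_000_000;
    CostCalculation {
        input_cost,
        output_cost,
        total_cost: input_cost + output_cost,
    }
}
```

With the example request above (1500 input and 500 output tokens at 15M/60M per million), the two legs come out to 22,500 and 30,000 base units respectively.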
Pricing Tiers
Providers can offer tiered pricing based on commitment levels, enabling discounts for high-volume users or subscriptions.
// Pricing tiers
pub enum PricingTier {
    PayAsYouGo, // Standard per-token pricing
    Monthly,    // Monthly subscription with discount
    Enterprise, // Custom enterprise pricing
}

// Set tier-based pricing
pricing.set_tier_price(
    &model_id,
    &provider_id,
    PricingTier::Monthly,
    12_000_000, // 20% discount on input
    48_000_000, // 20% discount on output
)?;

// Calculate with tier
let discounted = pricing.calculate_cost_with_tier(
    &model_id,
    &provider_id,
    1500,
    500,
    PricingTier::Monthly,
)?;
Tier-based pricing integrates with the identity system. Users with monthly subscriptions stored in their TDIP identity credentials automatically receive discounted rates.
Model Download Manager
The DownloadManager tracks model download progress when providers pull model weights to their local infrastructure. It supports pause/resume and progress tracking.
use tenzro_model::download::{
    DownloadManager, DownloadTask, DownloadStatus,
};

// Create download manager
let downloads = DownloadManager::new();

// Start downloading a model
let task_id = downloads.start_download(
    model_id.clone(),
    "https://models.tenzro.network/claude-4.weights".to_string(),
    "/var/tenzro/models/".to_string(),
)?;

// Check progress
let task = downloads.get_task(&task_id)?;
match task.status {
    DownloadStatus::Pending => println!("Queued"),
    DownloadStatus::InProgress => {
        println!("Progress: {}%", task.progress);
        println!("Downloaded: {} / {} bytes",
            task.downloaded_bytes, task.total_bytes);
    }
    DownloadStatus::Completed => println!("Complete"),
    DownloadStatus::Failed => println!("Error: {}", task.error),
}

// Pause and resume
downloads.pause(&task_id)?;
downloads.resume(&task_id)?;

// Cancel download
downloads.cancel(&task_id)?;
Note: The current download implementation is a stub. start_download() marks the task as started but performs no actual download.
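The pause/resume semantics the API implies can be sketched as status transitions. The Paused variant and the transition rules here are assumptions for illustration; the real DownloadManager may model this differently:

```rust
// Download lifecycle states (Paused is assumed; the others mirror the
// DownloadStatus variants shown above).
#[derive(Clone, Copy, Debug, PartialEq)]
enum DownloadStatus {
    Pending,
    InProgress,
    Paused,
    Completed,
    Failed,
}

// Pausing only makes sense for an active transfer.
fn pause(status: DownloadStatus) -> Result<DownloadStatus, &'static str> {
    match status {
        DownloadStatus::InProgress => Ok(DownloadStatus::Paused),
        _ => Err("can only pause an in-progress download"),
    }
}

// Resuming only makes sense for a paused transfer.
fn resume(status: DownloadStatus) -> Result<DownloadStatus, &'static str> {
    match status {
        DownloadStatus::Paused => Ok(DownloadStatus::InProgress),
        _ => Err("can only resume a paused download"),
    }
}
```

Encoding the transitions this way lets the manager reject nonsensical calls (resuming a completed download, pausing a failed one) with a clear error instead of silently ignoring them.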
Model Integrity Verification
Models include cryptographic hashes to ensure integrity. Providers verify model weights match the registered hash before serving inference requests.
use std::collections::HashMap;
use tenzro_model::verification::verify_model_hash;
use tenzro_crypto::hashing::sha256;

// Model includes expected hash
let model = ModelInfo {
    id: Hash::from_bytes([0x42; 32].as_ref()),
    name: "GPT-5".to_string(),
    // ... other fields ...
    metadata: {
        let mut m = HashMap::new();
        m.insert("weights_hash".to_string(),
            "abcd1234...".to_string());
        m
    },
};

// Provider verifies downloaded weights
let weights_path = "/var/tenzro/models/gpt5.weights";
let actual_hash = sha256(&std::fs::read(weights_path)?);
let expected = model.metadata.get("weights_hash").unwrap();
assert_eq!(
    &hex::encode(actual_hash.as_bytes()),
    expected,
    "Model integrity check failed!"
);
Note: The verify_model_hash() function exists but is never called during registration or download. Model integrity verification is not enforced.
Model Registration Flow
Providers register models by submitting a registration transaction to the Tenzro Ledger. The model registry maintains both on-chain (permanent) and off-chain (performance) storage.
// Full model registration flow

// 1. Provider creates model info
let model = ModelInfo {
    id: Hash::random(),
    name: "Llama-4-70B".to_string(),
    category: ModelCategory::Language,
    modality: Modality::TextToText,
    version: "4.0".to_string(),
    description: "Open-source language model".to_string(),
    provider_id: provider_address.clone(),
    metadata: HashMap::from([
        ("context_length".to_string(), "8192".to_string()),
        ("parameters".to_string(), "70B".to_string()),
        ("weights_hash".to_string(), "abc123...".to_string()),
    ]),
};

// 2. Submit registration transaction
let tx = Transaction::new(
    TransactionType::ModelRegistration {
        model: model.clone(),
    },
    provider_address.clone(),
);

// 3. Transaction is validated and included in a block
// 4. Registry indexes the model for fast lookup
registry.register(model)?;

// 5. Model is now discoverable
let all_llms = registry.list_by_category(ModelCategory::Language);
On-chain registration provides permanence and censorship resistance. The off-chain registry cache provides fast lookups for inference routing without blockchain round-trips.
Gossipsub Model Discovery
New model registrations are propagated via the tenzro/models/1.0.0 gossipsub topic. Nodes subscribe to this topic to maintain up-to-date model catalogs.
// Model registration gossip message
{
    "type": "model_registered",
    "model_id": "0x1234...",
    "name": "Llama-4-70B",
    "category": "Language",
    "modality": "TextToText",
    "provider_id": "0x5678...",
    "block_height": 12345,
    "timestamp": 1711234567
}
Gossipsub propagation enables near-instant discovery of new models across the network. Nodes validate the registration transaction before updating their local registry to prevent spam.
Provider Onboarding
Becoming a model provider requires registering an identity, staking TNZO, and advertising model availability. The process integrates with TDIP identity and the staking system.
// Provider onboarding flow

// 1. Register provider identity (TDIP machine identity)
let provider_did = identity_registry.register_machine(
    "InferenceProvider".to_string(),
    vec!["inference".to_string()],
    None, // autonomous (no controller)
)?;

// 2. Stake TNZO for provider role
// Note: 10_000 * 10^18 overflows u64, so the amount is computed in u128.
let stake_amount: u128 = 10_000 * 10u128.pow(18); // 10k TNZO
staking_manager.stake(
    provider_address.clone(),
    stake_amount,
    StakingRole::ModelProvider,
)?;

// 3. Register provider info
let provider_info = ProviderInfo {
    id: provider_address.clone(),
    endpoint: "https://inference.example.com".to_string(),
    supported_models: vec![], // Initially empty
    status: ProviderStatus::Active,
    reputation: 100.0,
    // ... metrics ...
};
provider_mgr.register(provider_info)?;

// 4. Register models (as shown earlier)
// 5. Start serving inference requests
Staking aligns incentives by requiring providers to lock capital. Providers that misbehave (for example, by serving incorrect outputs or going offline) can be slashed, reducing both their stake and their reputation.
Note: Provider registration has no access control. Anyone can register fake providers without staking or identity verification.
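The slashing arithmetic can be illustrated with a simple basis-points calculation. The fraction, its representation, and the function shape are assumptions for illustration, not protocol parameters:

```rust
// Burn a fixed fraction of the locked stake, expressed in basis points
// (1 bps = 0.01%). Returns (remaining stake, amount slashed).
// The bps representation is an assumption; the protocol may define
// slashing fractions differently.
fn slash(stake: u128, slash_bps: u32) -> (u128, u128) {
    let slashed = stake * slash_bps as u128 / 10_000;
    (stake - slashed, slashed)
}
```

Integer basis points keep the calculation exact and deterministic across nodes, which matters when every validator must compute the same post-slash balance.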
Integration with Inference Router
The model registry and provider manager feed the inference router, which selects the optimal provider for each request based on configurable strategies.
use tenzro_model::router::{
    InferenceRouter, RoutingStrategy,
};

// Create router with model registry and provider manager
let router = InferenceRouter::new(
    registry.clone(),
    provider_mgr.clone(),
);

// Route request using price strategy
let provider = router.select_provider(
    &model_id,
    RoutingStrategy::Cheapest,
)?;
println!("Selected provider: {}", provider.id);
println!("Endpoint: {}", provider.endpoint);
The router abstracts provider selection from the application. Users request inference for a model ID, and the router handles provider discovery, selection, and failover. See the Inference page for details on routing strategies.
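At its core, a Cheapest strategy reduces to picking the minimum-price candidate among the providers serving the model. A sketch with simplified, hypothetical types:

```rust
// Simplified routing candidate (illustrative; not the real ProviderInfo).
struct Candidate {
    id: &'static str,
    price_per_million: u64, // TNZO base units per 1M tokens
}

// The Cheapest strategy as a single iterator pass.
fn select_cheapest(candidates: &[Candidate]) -> Option<&Candidate> {
    candidates.iter().min_by_key(|c| c.price_per_million)
}
```

Returning an Option handles the failover edge case naturally: if no provider serves the model, the router gets None and can surface an error instead of panicking.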
RPC Integration
The tenzro-node RPC server exposes model registry functions via JSON-RPC. Applications query models and providers without running a full node.
# RPC methods for model management

# List all models
curl -X POST https://rpc.tenzro.network \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "tenzro_listModels",
    "params": [],
    "id": 1
  }'

# Get model details
curl -X POST https://rpc.tenzro.network \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "tenzro_getModel",
    "params": ["0x1234..."],
    "id": 1
  }'

# Register model (requires provider auth)
curl -X POST https://rpc.tenzro.network \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "tenzro_registerModel",
    "params": [{...model_info...}],
    "id": 1
  }'
The MCP server on port 3001 also exposes a list_models tool, enabling AI agents to discover available models programmatically.