Tenzro Cortex
Tenzro Cortex is a reasoning primitive that wraps OpenMythos-style Recurrent-Depth Transformers (RDT) with Mixture-of-Experts (MoE) routing. Unlike standard token-by-token inference, Cortex exposes loop depth as a first-class, on-chain billable resource — each recurrent reasoning step is metered, priced, and cryptographically receipted.
What Is Recurrent-Depth Reasoning?
Traditional transformers perform fixed-depth forward passes. OpenMythos-family architectures introduce a Prelude → Recurrent Block → Coda structure where the recurrent block can iterate N times per token, allowing the model to allocate adaptive compute per problem.
Prelude
Initial embedding and context projection. Runs once per token.
Recurrent Block
Sparse MoE with MLA/GQA attention. Loops N times with latent-state continuation. N is bounded by the reasoning tier.
Coda
Final projection to output distribution. Runs once after the recurrent block terminates.
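The three-stage flow can be pictured as a toy loop. Everything below is an illustrative stand-in, not the actual RDT kernels: `prelude`, `recurrent_block`, and `coda` are placeholder functions that only show where adaptive depth enters.

```python
def prelude(token_embedding):
    # Project the input embedding into a latent state (runs once per token).
    return [x * 0.5 for x in token_embedding]

def recurrent_block(latent):
    # One recurrent step. In the real model this is sparse MoE with MLA/GQA
    # attention; here it is a stand-in update over the latent state.
    return [min(x + 0.1, 1.0) for x in latent]

def coda(latent):
    # Final projection to an output score (runs once after the loop ends).
    return sum(latent)

def forward(token_embedding, n_loops):
    # n_loops is the adaptive depth, bounded by the chosen reasoning tier.
    latent = prelude(token_embedding)
    for _ in range(n_loops):
        latent = recurrent_block(latent)
    return coda(latent)

shallow = forward([0.2, 0.4], n_loops=2)   # Fast-tier depth
deep = forward([0.2, 0.4], n_loops=16)     # Deep-tier depth
```

The point of the sketch is that the same token can consume very different amounts of compute depending on `n_loops`, which is exactly the quantity Cortex meters and bills.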
Reasoning Tiers
Clients select a tier when submitting a Cortex request. Each tier has bounds on loop depth and a corresponding TNZO price schedule:
pub enum ReasoningTier {
    Fast,     // 1-4 loops, cheapest
    Standard, // 4-8 loops
    Deep,     // 8-16 loops
    Maximum,  // 16-32 loops, premium
}
Cortex Workers
A Cortex Worker is a node that advertises recurrent-depth capacity to the gossip topic tenzro/cortex/1.0.0. Workers run a Python sidecar hosting the RDT model and a Rust coordinator that meters loops, signs receipts, and routes payment.
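The coordinator's loop metering can be pictured as a counter that enforces the tier's upper bound. This is a minimal sketch under assumed names: `LoopMeter` and `TIER_BOUNDS` are illustrative, not the coordinator's actual API.

```python
# Loop-depth bounds per tier, mirroring the ReasoningTier enum above.
TIER_BOUNDS = {
    "fast": (1, 4),
    "standard": (4, 8),
    "deep": (8, 16),
    "maximum": (16, 32),
}

class LoopMeter:
    """Counts recurrent steps and refuses to exceed the tier's upper bound."""

    def __init__(self, tier):
        self.min_loops, self.max_loops = TIER_BOUNDS[tier]
        self.loops_used = 0

    def step(self):
        if self.loops_used >= self.max_loops:
            return False  # budget exhausted: the sidecar must terminate the loop
        self.loops_used += 1
        return True

meter = LoopMeter("deep")
while meter.step():
    pass  # one recurrent block iteration per permitted step
```

After the loop terminates, `meter.loops_used` is the value that would land in the receipt's `loops_used` field.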
Register a Worker (JSON-RPC)
curl -X POST https://rpc.tenzro.network -H "Content-Type: application/json" -d '{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tenzro_registerCortexWorker",
  "params": [{
    "model_id": "mythos-70b-rdt",
    "sidecar_url": "http://127.0.0.1:8765",
    "arch": "rdt-moe",
    "max_loops": 32,
    "moe_experts": 64,
    "experts_per_token": 2,
    "attn_type": "mla",
    "worker_did": "did:tenzro:machine:..."
  }]
}'
CLI
tenzro cortex register \
--model-id mythos-70b-rdt \
--sidecar-url http://127.0.0.1:8765 \
--max-loops 32 \
--moe-experts 64
tenzro cortex list
tenzro cortex reason --model-id mythos-70b-rdt --input-hex 0x... --tier deep
Signed Receipts
Every Cortex inference produces a signed CortexReceipt containing:
- loops_used — actual recurrent depth consumed
- experts_activated — number of MoE experts engaged
- weights_hash — SHA-256 of the served model weights
- runtime_hash — SHA-256 of the runtime binary
- tee_attestation (optional) — present if served from a TEE
- signature — Ed25519 signature by the worker DID
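A receipt might be assembled and hashed like this. Only the field names come from the spec above; the canonical encoding (sorted-key JSON) and the placeholder byte strings are assumptions for illustration, and the Ed25519 signing step itself is omitted.

```python
import hashlib
import json

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical weight and runtime bytes, hashed as the receipt requires.
weights_hash = sha256_hex(b"model weight bytes (placeholder)")
runtime_hash = sha256_hex(b"runtime binary bytes (placeholder)")

receipt = {
    "loops_used": 12,
    "experts_activated": 2,
    "weights_hash": weights_hash,
    "runtime_hash": runtime_hash,
    "tee_attestation": None,  # optional, present only for TEE-served workers
}

# An assumed canonical form: sorted-key JSON. The worker would sign this
# digest with the Ed25519 key behind its DID.
canonical = json.dumps(receipt, sort_keys=True).encode()
digest = sha256_hex(canonical)
```

Because the digest covers every field, a verifier that recomputes it from the receipt body can detect any tampering before checking the signature.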
Receipts feed into settlement: settle_cortex_payment() routes TNZO from the client to the worker treasury, deducting a network commission.
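The settlement split can be sketched with an assumed per-loop price and commission rate; the real schedule lives on-chain, so the numbers here are purely illustrative.

```python
def settle_cortex_payment(loops_used, price_per_loop, commission_rate=0.05):
    """Split a client's payment between worker and network.

    price_per_loop and commission_rate are illustrative values, not the
    network's actual TNZO schedule.
    """
    total = loops_used * price_per_loop
    commission = total * commission_rate
    worker_payout = total - commission
    return worker_payout, commission

payout, fee = settle_cortex_payment(loops_used=12, price_per_loop=0.01)
```

Metering by `loops_used` is what ties billing back to the receipt: the client pays exactly for the recurrent depth the worker attested to consuming.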
MCP & SDK
The cortex_reason MCP tool is available on the main Tenzro MCP server. Both the Rust and TypeScript SDKs expose CortexClient for programmatic access.
Why This Matters
- Adaptive compute without Chain-of-Thought token inflation
- Decentralized reasoning marketplace — providers compete on loop depth, MoE width, and TEE premium
- Verifiable intelligence — every reasoning step is cryptographically attested
- Agent uplift — autonomous agents can dial reasoning depth based on task difficulty and available budget
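The last point can be sketched as an agent picking the deepest tier its budget allows. The per-tier prices below are hypothetical, not the TNZO schedule.

```python
# Hypothetical TNZO prices per tier, ordered cheapest to priciest.
TIER_PRICES = [
    ("fast", 0.04),
    ("standard", 0.08),
    ("deep", 0.16),
    ("maximum", 0.32),
]

def pick_tier(budget_tnzo):
    """Return the deepest reasoning tier the agent can afford, or None."""
    affordable = [name for name, price in TIER_PRICES if price <= budget_tnzo]
    return affordable[-1] if affordable else None

choice = pick_tier(0.20)  # enough for Deep, not Maximum
```

An agent running this before each Cortex request trades reasoning depth against remaining budget automatically, which is the "dial reasoning depth" behavior described above.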